Tuesday, 18 October 2011, 11:00-12:00 - Room: W9-109
Dipl.-Kff. Nina Westerheide
Universität Bielefeld
Flexible Modelling of Unemployment Duration Using Spline-Based Functional Hazard Models
This talk demonstrates the flexibility and capacity of penalized spline smoothing as an estimation routine for modelling duration time data. The statistical model is built upon the hazard rate, where the smooth covariate effects are allowed to vary with time. The contribution shows how to make use of available software to fit rather complex functional duration time models easily after some simple data management. The non-proportional hazard model is applied in two different settings involving unemployment data. First, unemployment behaviour in Germany and the UK between 1995 and 2005 is explored based on data from national panel studies, namely the German Socio-Economic Panel and the British Household Panel Survey. Second, a non-proportional hazard model with competing risks is employed to investigate dynamic covariate effects and differences between competing job markets depending on the distance between the former and the new workplace. For this purpose a massive database, the Scientific Use File ‘Regional File 1975 - 2004’ of the IAB Employment Samples from the German Federal Employment Agency, is used to analyse unemployment behaviour in Germany between 2000 and 2004.
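Functional hazard models of this kind are often fitted, after the "simple data management" the abstract mentions, by expanding each duration spell into a piecewise-exponential (person-period) layout and running a penalized Poisson regression on the result. The talk's actual software and variable names are not given, so the following pandas sketch of the expansion step uses invented column names purely for illustration:

```python
import pandas as pd

def expand_to_person_period(df, duration_col, event_col, cuts):
    """Split each spell at the interval cut points so that a
    piecewise-exponential (Poisson) model can be fitted to the result.
    Assumes the cut points cover the longest observed duration."""
    rows = []
    for _, r in df.iterrows():
        start = 0.0
        for j, cut in enumerate(cuts):
            if r[duration_col] <= start:
                break                         # spell ended before this interval
            stop = min(cut, r[duration_col])
            rows.append({
                "id": r["id"],
                "interval": j,                # time axis for time-varying effects
                "exposure": stop - start,     # offset in the Poisson regression
                "event": int(r[event_col] and r[duration_col] <= cut),
            })
            start = cut
    return pd.DataFrame(rows)

# toy spells: one exit at 2.5 months, one spell censored at 1.0 month
spells = pd.DataFrame({"id": [1, 2], "dur": [2.5, 1.0], "exit": [1, 0]})
pp = expand_to_person_period(spells, "dur", "exit", cuts=[1.0, 2.0, 3.0])
```

A Poisson GLM of `event` on spline bases in `interval` (with `log(exposure)` as offset) then approximates the non-proportional hazard model.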
Tuesday, 15 November 2011, 11:00-12:00 - Room: W9-109
Dipl.-Vw. Verena Meier und Dr. Joachim Schnurbus
Universität Bielefeld
Convergence of the high-skilled in German regions: clubs, nonlinearities, and spatial patterns
We apply a club-based growth convergence framework, which allows us to capture club-specific heterogeneity and nonlinearity simultaneously with a nonparametric two-step procedure. We extend this approach by an analysis and tests of the spatial structure of growth convergence models to assess the spatio-temporal diffusion of high-skilled employees in Germany. We find that the club variable captures nearly all of the spatial information; thus there is no empirical evidence in favor of spatial (augmented) Solow models for German data. Conversely, neglecting the club information leads to a considerable degree of spatial association in the model. We check the robustness of our findings by analyzing further data which have been found to be generated by spatial Solow models. The evidence clearly supports our finding that the latter spatial association structures are already captured by the flexibility of convergence paths at the club level.
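Spatial association of the kind tested here is typically quantified with a statistic such as Moran's I computed on model residuals. The abstract does not specify the authors' exact test, so the following is only a minimal numpy illustration on a toy lattice:

```python
import numpy as np

def morans_i(z, W):
    """Moran's I for a residual vector z and spatial weight matrix W:
    I = (n / S0) * z'Wz / z'z, with z demeaned and S0 the sum of weights."""
    z = z - z.mean()
    n = z.size
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# rook-contiguity weights for a 2x2 grid of regions
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
resid = np.array([1.0, -1.0, -1.0, 1.0])   # checkerboard pattern of residuals
print(morans_i(resid, W))                   # -1.0: perfect spatial dispersion
```

Values near zero indicate no spatial association (the situation the authors report once the club variable is included); strongly positive or negative values indicate clustering or dispersion.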
Tuesday, 29 November 2011, 11:00-12:00 - Room: W9-109
Dipl.-Math. Nadeshda Kaufmann
Universität Bielefeld
Spatial Statistics and Spatial Econometrics for Lattice Data
Interest in spatial modeling has increased enormously in recent years. Spatial Statistics as well as Spatial Econometrics fill an impressive body of literature, clearly demonstrating the importance of the corresponding methods. The intention of this talk is to shed some light on the nature and relevance of the different objectives and modeling aspects of these fields from a practitioner's point of view. The discussion will be restricted to data collected on a lattice or grid. Different modeling approaches for continuous responses are compared through a Monte Carlo simulation and a real-data example.
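A standard lattice model that such a comparison would include is the spatial autoregressive (SAR) specification y = ρWy + Xβ + ε, equivalently y = (I - ρW)⁻¹(Xβ + ε). The talk's concrete simulation design is not given, so the following numpy sketch of one Monte Carlo draw uses an assumed circular lattice:

```python
import numpy as np

def simulate_sar(W, rho, beta, X, rng):
    """One draw from the SAR model y = rho*W y + X beta + eps,
    generated via y = (I - rho*W)^{-1} (X beta + eps)."""
    n = W.shape[0]
    eps = rng.standard_normal(n)
    return np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)

rng = np.random.default_rng(0)
n = 100
# row-standardised circular lattice: each unit's neighbours are the two adjacent units
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = simulate_sar(W, rho=0.6, beta=np.array([1.0, 2.0]), X=X, rng=rng)
```

Repeating such draws and fitting competing specifications (SAR, spatial error, nonspatial OLS) to each is the usual structure of a Monte Carlo comparison for continuous lattice responses.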
Tuesday, 13 December 2011, 11:00-12:00 - Room: W9-109
apl. Prof. Dr. Peter Wolf
Universität Bielefeld
RELAX from A to Z
Modern working environments [including those of statisticians] are centred around a wide variety of IT devices, from PCs and laptops up to powerful servers. These multifunction devices allow data analyses, estimations and simulations, but also serve everyday work and the preparation of texts, and in particular support the construction of new procedures. But are these tools the way we would like them to be? And can they be used in a manner that benefits the tasks that matter? Starting from such questions, the talk presents a number of aspects of the working environment RELAX, whose development placed the user and his work processes at the centre.
Tuesday, 10 January 2012, 11:00-12:00 - Room: W9-109
M.Sc. Florian Brezina
Universität Regensburg
A bootstrap unit root test for panels with common factors
This article presents a method to analyze nonstationarity in a cross-section of time series. A common factor structure is employed to allow for common shocks in the panel and is estimated by principal components. Although consistent, augmented Dickey-Fuller tests applied to the estimated common and idiosyncratic components often behave poorly in finite samples. Bootstrap resampling techniques known from time series analysis are extended to the panel context to lessen size distortions. Monte Carlo experiments show that the bootstrap procedure provides a substantial improvement in test size accuracy while maintaining power. Since the improvement is especially pronounced for short time series and for tests against trend-stationarity, the bootstrap extension might be helpful in many applications.
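The first step of such a procedure, estimating the common component by principal components, can be sketched in a few lines. The data-generating design below (a random-walk factor with i.i.d. loadings and errors) is an assumption for illustration, not the paper's actual experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 30

# panel with one nonstationary common factor (a random walk)
f = np.cumsum(rng.standard_normal(T))        # common factor
lam = rng.standard_normal(N)                 # factor loadings
e = rng.standard_normal((T, N))              # stationary idiosyncratic errors
X = np.outer(f, lam) + e                     # observed T x N panel

# principal-components estimate of the common factor:
# first left singular vector of the demeaned panel, scaled by its singular value
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
f_hat = U[:, 0] * s[0]

# the estimate tracks the true factor up to sign and scale
corr = abs(np.corrcoef(f, f_hat)[0, 1])
```

Unit root tests (ADF, here bootstrapped) would then be applied separately to `f_hat` and to the idiosyncratic residuals `Xc - np.outer(f_hat, Vt[0])`.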
Dipl.-Vw. Roland Weigand
Universität Regensburg
Fractional Common Factors for Modelling and Forecasting Multivariate Realized Volatility
Multivariate volatility modelling and forecasting is essential for portfolio optimization, risk management and a better understanding of financial markets. Recent work documents successful use of intraday data for these purposes. In this contribution we propose a long memory factor approach to describe the dynamic behavior of realized covariance matrices. We evaluate the benefits of different data transforms, such as the matrix logarithm and the Cholesky decomposition, which ensure positive definite matrix forecasts and have recently been applied in the literature. The transformed series display long memory dynamics and apparent co-movement. Evidence of fractional cointegration motivates the use of a factor model with fractional common components. This structure, along with sparsely parameterized short memory dynamics, helps to mitigate the curse of dimensionality, which looms even for a moderate number of assets. With a truncated (so-called ‘type II’) formulation of the fractional integration operators, the model can easily be cast in state space form, so that computationally feasible parameter estimation via the EM algorithm, factor assessment, and forecasting can all rely on the standard Kalman filter and smoother. The methods are applied to a daily sample of six liquid US stocks and the usefulness of our approach is illustrated by an out-of-sample forecast comparison.
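The truncated ('type II') fractional integration operator cuts off the binomial expansion of (1 - L)^d at the start of the sample, which is what makes a finite state space representation possible. A minimal numpy sketch of the filter weights and their application (an illustration, not the authors' implementation):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of the fractional difference operator (1 - L)^d:
    w_0 = 1 and the recursion w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d under the type II convention: the expansion is
    truncated at the sample start, i.e. pre-sample values are set to zero."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

# d = 1 recovers ordinary first differences (with x_0 kept, per the truncation)
print(frac_diff(np.array([1.0, 2.0, 3.0]), 1.0))   # [1. 1. 1.]
```

For non-integer d the weights decay hyperbolically rather than cutting off, which is the source of the long memory behavior the abstract describes.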
Tuesday, 24 January 2012, 11:00-12:00 - Room: W9-109
Prof. Dr. Harry Haupt und Dr. Joachim Schnurbus
Universität Bielefeld
Pricing of houses and their characteristics using multiple nonparametric regression
"What the hedonic approach attempted was to provide a tool for
[1] estimating missing prices ... of particular bundles not observed, ...
[2] detection of the relevant characteristics of a commodity and ...
[3] estimation of their marginal market valuation." (Griliches, 1990).
This paper pursues these three issues, reflecting the recent emphasis on the necessity of nonlinear methods for modeling hedonic price functions. The resulting problem of model selection and validation can be difficult, as the respective statistical criteria need not agree on a parsimonious model that also admits an economically sound interpretation. We apply multiple nonparametric regression and cross-validation for estimation, model validation and selection to analyze an urban hedonic house price function.
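A common building block of such an analysis is kernel regression with a cross-validated bandwidth. The paper's actual estimator and data are not reproduced here, so the sketch below uses a univariate Nadaraya-Watson estimator on simulated data purely as a stand-in:

```python
import numpy as np

def nw_fit(x_train, y_train, x_eval, h):
    """Nadaraya-Watson regression with a Gaussian kernel and bandwidth h."""
    K = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (K @ y_train) / K.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimising the leave-one-out squared prediction error."""
    best_h, best_err = None, np.inf
    for h in grid:
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(K, 0.0)             # leave each observation out
        y_hat = (K @ y) / K.sum(axis=1)
        err = np.mean((y - y_hat) ** 2)
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)                   # a single house characteristic
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)   # nonlinear "price"
h = loo_cv_bandwidth(x, y, grid=[0.01, 0.05, 0.1, 0.3])
fit = nw_fit(x, y, np.linspace(0, 1, 50), h)
```

In the multiple-regressor hedonic setting, a product kernel with one cross-validated bandwidth per characteristic plays the same role; bandwidths chosen very large effectively smooth a characteristic out of the model, which links bandwidth selection to the detection of relevant characteristics in Griliches' sense.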