UBC Theses and Dissertations

Median loss analysis and its application to model selection
Yu, Chi Wai

Abstract

In this thesis, we propose a median-loss-based procedure for inference. The optimal estimators under this criterion often have desirable properties. For instance, they have good resistance to outliers and are robust to the specific loss used to form them. In the Bayesian framework, we establish the asymptotics of median-loss-based Bayes estimators. It turns out that the median-based Bayes estimator has a root-n rate of convergence and is asymptotically normal. We also give a simple way to compute this Bayesian estimator. In regression problems, we compare the median-based Bayes estimator with two other estimators. One is the frequentist version of our median-loss-minimizing estimator, which is exactly the least median of squares (LMS) estimator; the other is the two-sided least trimmed squares (LTS) estimator. This comparison is natural because the LMS estimator is median-based but has only a cube-root-n rate of convergence, while the two-sided LTS is not median-based but has a root-n rate. We show that our median-based Bayes estimator is a good tradeoff between the LMS and two-sided LTS estimators. For model selection problems, we propose a median analog of the usual cross-validation procedure. In the context of linear models, we present simulation results comparing the performance of cross validation (CV) and median cross validation (MCV). Our results show that when the error terms come from a heavy-tailed distribution, or from a normal distribution with small values of the unknown parameters, MCV works better than CV does in terms of the probability of choosing the true model. By contrast, when the error terms come from a normal distribution and the values of the unknown parameters are large, CV outperforms MCV.
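
To fix ideas, one way to write the criterion sketched above (our notation, not taken verbatim from the thesis): with L(θ, a) the loss of estimate a and the median taken under the posterior distribution of θ given x, the median-loss Bayes estimator replaces the posterior expectation in the usual Bayes rule with a posterior median:

```latex
\hat{a}_{\text{Bayes}}(x) = \arg\min_{a}\; \mathbb{E}\!\left[\, L(\theta, a) \mid x \,\right]
\qquad\text{vs.}\qquad
\hat{a}_{\text{med}}(x) = \arg\min_{a}\; \operatorname{med}\!\left[\, L(\theta, a) \mid x \,\right]
```

Because the median of a loss is unchanged by any monotone transformation of that loss, this substitution is one way to see why such estimators can be insensitive to the specific loss used to form them.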
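As a concrete illustration of the CV-versus-MCV comparison described above, the sketch below implements one plausible reading in Python: leave-one-out squared prediction errors for each candidate linear model, summarized by the mean (CV) or by the median (MCV). All function names here are ours, and the thesis's actual procedure (fold structure, loss, trimming) may differ.

```python
import numpy as np

def loo_squared_errors(X, y):
    """Leave-one-out squared prediction errors for an OLS fit.

    Uses the hat-matrix identity: the LOO residual equals
    e_i / (1 - h_ii), with e_i the ordinary residual and h_ii the
    leverage, so no explicit refitting loop is needed.
    """
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
    resid = y - H @ y                      # ordinary residuals
    loo_resid = resid / (1.0 - np.diag(H))
    return loo_resid ** 2

def cv_score(X, y):
    """Ordinary cross validation: mean of the LOO squared errors."""
    return np.mean(loo_squared_errors(X, y))

def mcv_score(X, y):
    """Median cross validation: median of the same errors, which
    downweights splits dominated by outlying observations."""
    return np.median(loo_squared_errors(X, y))

def select_model(X_full, y, candidates, score=mcv_score):
    """Return the candidate column subset with the smallest score."""
    return min(candidates, key=lambda cols: score(X_full[:, list(cols)], y))

# Toy demo: the true model uses columns (0, 1); errors are heavy tailed,
# the regime in which the abstract reports MCV doing better than CV.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X[:, [0, 1]] @ np.array([1.0, 2.0]) + rng.standard_t(df=1, size=n)
candidates = [(0, 1), (0, 2), (0, 1, 2)]
print(select_model(X, y, candidates, score=mcv_score))
print(select_model(X, y, candidates, score=cv_score))
```

Repeating the demo over many simulated data sets and tallying how often each criterion returns the true column subset reproduces, under these assumptions, the kind of model-recovery comparison the abstract reports.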

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International