Coffee will be served in room 326: at 3:30PM for each Wednesday colloquium, at 10:30AM for each 11AM colloquium, and at 10AM for each 9AM colloquium. TAs are responsible for set-up according to the schedule.
Information about past colloquia is available here.
|Wednesday, January 18, 4:00pm||Fangfang Wang, University of Connecticut||On the Estimation of Integrated Volatility in the Frequency Domain||AUST 105|
|Wednesday, January 25, 4:00pm||Victor Hugo Lachos Davila, University of Connecticut||Heavy-tailed longitudinal regression models for censored data: A likelihood based perspective||AUST 105|
|Wednesday, February 1, 4:00pm||AUST 105|
|Monday, February 6, 11:00am||Yuwen Gu, School of Statistics, University of Minnesota||High-dimensional Generalizations of Asymmetric Least Squares and Their Applications||AUST 344|
|Wednesday, February 8, 4:00pm||AUST 105|
|Friday, February 10, 11:00am||Jon Steingrimsson, Johns Hopkins Bloomberg School of Public Health||Doubly Robust Survival Trees and Forests||AUST 105|
|Monday, February 13, 9:00am||Nhat Ho, University of Michigan||Parameter Estimation and Multilevel Clustering with Mixture and Hierarchical Models||AUST 344|
|Wednesday, February 15, 4:00pm||Vishesh Karwa, Harvard University||Differentially Private Statistical Inference||AUST 105|
|Friday, February 17, 11:00am||HaiYing Wang, University of New Hampshire||Information-Based Optimal Subdata Selection for Big Data Linear Regression||AUST 105|
|Monday, February 20, 9:00am||Kuang-Yao Lee, Yale School of Public Health||On Additive Conditional Independence for High-Dimensional Statistical Analysis||AUST 344|
|Wednesday, February 22, 10:00am||Jon Steingrimsson, Johns Hopkins Bloomberg School of Public Health||Doubly Robust Survival Trees and Forests (rescheduled from Feb. 10)|
|Wednesday, March 1, 4:00pm||AUST 105|
|Wednesday, March 8, 4:00pm (joint event with School of Education)||Dan McNeish, University of North Carolina, Chapel Hill||Is Bayes a Solution for Small Samples?||Gentry 144|
|Wednesday, March 22, 4:00pm||Gongjun Xu, University of Michigan||AUST 105|
|Wednesday, March 29, 4:00pm||Andrea Troxel, NYU School of Medicine||AUST 105|
|Wednesday, April 5, 4:00pm||AUST 105|
|Wednesday, April 12, 4:00pm||Francesca Dominici, Harvard University||AUST 105|
|Wednesday, April 19, 4:00pm||Bani Mallick, Texas A&M University||AUST 105|
|Wednesday, April 26, 4:00pm||Gen Li, Columbia University||A General Framework for the Association Analysis of Heterogeneous Data||AUST 105|
Colloquium is organized by Professor Xiaojing Wang.
Fangfang Wang; University of Connecticut
On the Estimation of Integrated Volatility in the Frequency Domain
January 18, 2017
This talk discusses frequency-domain analysis of integrated volatility using intraday information. By exploring the informational content of the power spectrum of ultra-high-frequency data, the speaker proposes a realized periodogram-based estimator for the ex-post price variation. When intraday equity prices are sampled at ultra-high frequency and are contaminated with market microstructure noise, the proposed estimator behaves like a low-pass filter: it removes the noise by filtering out high-frequency periodograms and converts the high-frequency data into low-frequency periodograms. A numerical study shows that the proposed estimator is insensitive to the choice of sampling frequency and is competitive with other existing noise-corrected volatility measures.
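The low-pass idea in the abstract can be illustrated with a toy simulation (a sketch only, not the speaker's estimator; all parameter values below are made up). By Parseval's identity, summing the periodogram of the returns over all frequencies reproduces the realized variance, which microstructure noise inflates; keeping only the lowest frequencies screens out the noise, whose spectrum concentrates at high frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 23400                       # one trading day of 1-second returns
sigma_day = 0.2 / np.sqrt(252)  # daily volatility (20% annualized)
true_iv = sigma_day**2          # integrated variance to be recovered

# efficient returns contaminated by i.i.d. microstructure noise
r = rng.normal(0.0, sigma_day / np.sqrt(n), n)
u = rng.normal(0.0, 5e-5, n + 1)
obs = r + np.diff(u)            # observed (noisy) returns

# periodogram of observed returns; by Parseval, I.sum() == (obs**2).sum()
I = np.abs(np.fft.fft(obs))**2 / n

# realized variance uses all frequencies and is inflated by the noise,
# whose spectrum 2*eta^2*(1 - cos w) concentrates at high frequencies
rv_naive = I.sum()

# low-pass estimator: keep only the M lowest nonzero frequencies and rescale
M = 200
rv_lowpass = (n / M) * I[1:M + 1].sum()
```

In this toy setting `rv_naive` overshoots `true_iv` substantially, while `rv_lowpass` stays close to it.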
Victor Hugo Lachos Davila; University of Connecticut
Heavy-tailed longitudinal regression models for censored data: A likelihood based perspective
January 25, 2017
HIV RNA viral load measures are often subject to some upper and lower detection limits depending on the quantification assays. Hence, the responses are either left or right censored. Moreover, it is quite common to observe viral load measurements collected irregularly over time. A complication arises when these continuous repeated measures have a heavy-tailed behaviour. For such data structures, we propose a robust nonlinear censored regression model based on the scale mixtures of normal (SMN) distributions. To take into account the autocorrelation existing among irregularly observed measures, a damped exponential correlation structure is considered. A stochastic approximation of the EM (SAEM) algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The main advantage of this new procedure is that it allows us to estimate the parameters of interest and evaluate the log-likelihood function in an easy and fast way. Furthermore, the standard errors of the fixed effects and predictions of unobservable values of the response can be obtained as a by-product. The practical utility of the proposed method is exemplified using both simulated and real data.
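The core EM idea for censored responses can be sketched on a much simpler model than the talk's nonlinear mixed model with SMN errors: a left-censored Gaussian sample, where the E-step replaces each censored value by its truncated-normal conditional moments (an illustrative sketch; the function name and all numbers are ours, and the talk's SAEM replaces these exact expectations with stochastic approximations).

```python
import numpy as np
from math import erf, exp, pi, sqrt

def _phi(a):   # standard normal density
    return exp(-0.5 * a * a) / sqrt(2 * pi)

def _Phi(a):   # standard normal cdf
    return 0.5 * (1.0 + erf(a / sqrt(2)))

def em_censored_normal(y, lo, n_iter=200):
    """EM for N(mu, sigma^2) when values below `lo` are recorded as `lo`."""
    cens = y <= lo
    mu, s = y.mean(), y.std()
    for _ in range(n_iter):
        a = (lo - mu) / s
        lam = _phi(a) / _Phi(a)                 # inverse Mills ratio for Y < lo
        m1 = mu - s * lam                       # E[Y | Y < lo]
        v = s**2 * (1 - a * lam - lam**2)       # Var[Y | Y < lo]
        filled = np.where(cens, m1, y)          # E-step imputation
        mu_new = filled.mean()                  # M-step
        ss = np.where(cens, v + (m1 - mu_new)**2, (y - mu_new)**2)
        mu, s = mu_new, np.sqrt(ss.mean())
    return mu, s

# left-censor a N(2, 1) sample at 1 and recover the parameters
rng = np.random.default_rng(3)
y = np.maximum(rng.normal(2.0, 1.0, 5000), 1.0)
mu_hat, s_hat = em_censored_normal(y, 1.0)
```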
Yuwen Gu; School of Statistics, University of Minnesota
High-dimensional Generalizations of Asymmetric Least Squares and Their Applications
February 6, 2017
Asymmetric least squares (ALS) regression is a convenient and effective method for summarizing the conditional distribution of a response variable given the covariates. Recent years have seen a growing interest in ALS amongst statisticians, biostatisticians, econometricians and financial analysts. However, existing work on ALS only considers the traditional low-dimension-and-large-sample setting. In this talk, we systematically explore the Sparse Asymmetric LEast Squares (SALES) regression under high dimensionality. We show the complete theory using penalties such as lasso, MCP and SCAD. A unified efficient algorithm for fitting SALES is proposed and is shown to have a guaranteed linear convergence.
An important application of SALES is to detect heteroscedasticity in high-dimensional data and from that perspective it provides a computationally friendlier alternative to sparse quantile regression (SQR). However, when the goal is to separate the set of significant variables for the mean and that for the standard deviation of the conditional distribution, both SALES and SQR can fail when overlapping variables exist. To that end, we further propose a Coupled Sparse Asymmetric LEast Squares (COSALES) regression. We show that COSALES can consistently identify the two important sets of significant variables for the mean and standard deviation simultaneously, even when the two sets have overlaps.
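Unpenalized ALS, the loss underlying SALES, fits expectiles by iteratively reweighted least squares: residuals above the fit get weight tau, those below get 1 - tau. A minimal sketch of the unpenalized case (function name and data are ours; SALES adds a sparsity penalty on top of this loss):

```python
import numpy as np

def als_fit(X, y, tau, n_iter=100):
    """Asymmetric least squares: minimize sum of w_tau(r_i) * r_i^2,
    with r = y - X b and w_tau(r) = tau if r > 0 else 1 - tau (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = np.where(y - X @ beta > 0, tau, 1 - tau)
        Xw = X * w[:, None]                         # weighted design
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    return beta

# demo: tau = 0.5 recovers OLS; tau = 0.9 shifts the intercept upward
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
X = np.column_stack([np.ones(500), x])
y = 1.0 + 2.0 * x + rng.standard_normal(500)
b50 = als_fit(X, y, 0.5)
b90 = als_fit(X, y, 0.9)
ols = np.linalg.lstsq(X, y, rcond=None)[0]
```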
Jon Steingrimsson; Johns Hopkins Bloomberg School of Public Health
Doubly Robust Survival Trees and Forests
February 22, 2017 (rescheduled from February 10, 2017)
Survival trees use recursive partitioning to separate patients into distinct risk groups when some observations are right-censored. Survival forests average multiple survival trees creating more flexible prediction models. In the absence of censoring, the algorithms rely heavily on the choice of loss function used in the decision making process. Motivated by semiparametric efficiency theory, we replace the loss function used in the absence of censoring by doubly robust loss functions. We derive properties of these loss functions and show how the doubly robust survival trees and forest algorithms can be implemented using a certain form of response transformation. Furthermore, we discuss practical issues related to the implementation of the algorithms. The performance of the resulting survival trees and forests is evaluated through simulation studies and analyzing data on death from myocardial infarction.
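As a toy illustration of the response-transformation idea, the sketch below computes inverse-probability-of-censoring (IPCW) weights, the simpler special case of a doubly robust loss in which the augmentation term is dropped: each uncensored observation is weighted by one over the Kaplan-Meier estimate of the censoring survivor function just before its event time (the function name and simulation are ours).

```python
import numpy as np

def ipcw_weights(time, delta):
    """IPCW weights delta_i / G(T_i-), where G is the Kaplan-Meier
    survivor of the censoring time (censoring indicator is 1 - delta)."""
    order = np.argsort(time, kind="stable")
    G_minus = np.empty(len(time))
    surv, at_risk = 1.0, len(time)
    for i in order:
        G_minus[i] = surv            # G just before T_i
        if delta[i] == 0:            # a censoring "event" for G
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return np.where(delta == 1, 1.0 / G_minus, 0.0)

# with no censoring every weight is 1; under censoring the weights
# of the uncensored observations average out to about 1
rng = np.random.default_rng(7)
n = 2000
t_event = rng.exponential(1.0, n)
t_cens = rng.exponential(1.0 / 0.3, n)
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(int)
w = ipcw_weights(time, delta)
w_nc = ipcw_weights(time, np.ones(n, dtype=int))
```

Weighted averages of a loss over the uncensored observations then mimic the full-data loss, which is what lets standard tree-building machinery run on censored data.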
Nhat Ho; University of Michigan
Parameter Estimation and Multilevel Clustering with Mixture and Hierarchical Models
February 13, 2017
This talk addresses statistical inference with mixture and hierarchical models: efficiency of parameter estimation in finite mixtures, and scalable clustering of multilevel structured data.
It is well-known that due to weak identifiability and singularity structures of latent variable models’ parameter space, the convergence behaviors of parameter estimation procedures for mixture models remain poorly understood. In the first part of the talk, we describe a general framework for characterizing impacts of weak identifiability and singularity structures on the convergence behaviors of the maximum likelihood estimator in finite mixture models. This allows us to resolve several open questions regarding popular models such as Gaussian and Gamma mixtures, as well as to explicate the behaviors of complex models such as mixtures of skew normal distributions.
In the second part of the talk, we address a clustering problem with multilevel structured data, with the goal of simultaneously clustering a collection of data groups and partitioning the data in each group. By exploiting optimal transport distance as a natural metric for distributions and a collection of distributions, we propose an optimization formulation that allows to discover the multilevel clustering structures in grouped data in an efficient way. We illustrate the performance of our clustering method in a number of application domains, including computer vision.
Vishesh Karwa; Harvard University
Differentially Private Statistical Inference
February 15, 2017
Differential privacy has emerged as a powerful tool to reason rigorously about privacy and confidentiality issues. In its purest form, differential privacy limits direct access to raw data, allowing interaction only through a noisy interface. This requires new approaches to statistical inference. In this talk, I will introduce the definition of differential privacy, followed by some of its key properties. I will then present a framework for performing statistical inference under the constraint of differential privacy and its connections to measurement error and missing data models, with several examples. I will end with a demonstration of a differentially private interface to access data, developed as a part of ongoing collaboration between computer scientists, political scientists, and lawyers at Harvard.
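The textbook example of such a noisy interface is the Laplace mechanism (shown below as a generic sketch, not the speaker's system): clip each record to a known range so one record can move the mean by at most (hi - lo)/n, then add Laplace noise scaled to that sensitivity divided by the privacy budget epsilon.

```python
import numpy as np

def private_mean(x, lo, hi, epsilon, rng):
    """epsilon-differentially private mean via the Laplace mechanism.
    Clipping to [lo, hi] bounds each record's influence (the sensitivity)."""
    x = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)   # max change from altering one record
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)

# with a very generous budget the released mean is close to the true mean
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)         # true mean 0.5
m = private_mean(x, 0.0, 1.0, 1e6, rng)
```

Smaller epsilon means more noise, which is exactly the statistical price that the inference framework in the talk has to account for.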
HaiYing Wang; University of New Hampshire
Information-Based Optimal Subdata Selection for Big Data Linear Regression
February 17, 2017
Extraordinary amounts of data are being produced in many branches of science. Proven statistical methods are no longer applicable with extraordinarily large data sets due to computational limitations. A critical step in Big Data analysis is data reduction. Existing investigations in the context of linear regression focus on subsampling-based methods. However, not only is this approach prone to sampling errors, it also leads to a covariance matrix of the estimators that is typically bounded from below by a term that is of the order of the inverse of the subdata size. We propose a novel approach, termed information-based optimal subdata selection (IBOSS). Compared to existing methods, the IBOSS approach has the following advantages: (i) it is significantly faster; (ii) it is suitable for distributed parallel computing; (iii) the variances of the slope parameter estimators converge to 0 as the full data size increases even if the subdata size is fixed, i.e., the convergence rate depends on the full data size; (iv) data analysis for IBOSS subdata is straightforward and the sampling distribution of an IBOSS estimator is easy to assess. Theoretical results and extensive simulations demonstrate that the IBOSS approach is superior to subsampling-based methods, sometimes by orders of magnitude. The advantages of the new approach are also illustrated through analysis of real data.
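One concrete, deterministic version of the IBOSS idea (a D-optimality-motivated rule; the sketch below is our reading, not necessarily the talk's exact algorithm) keeps, for each covariate in turn, the not-yet-selected rows with the most extreme values, since extreme design points carry the most information about the slopes.

```python
import numpy as np

def iboss_dopt(X, k):
    """Select k rows: for each of the p covariates, take the r = k/(2p)
    available rows with the smallest and largest values of that covariate."""
    n, p = X.shape
    r = k // (2 * p)
    avail = np.ones(n, dtype=bool)
    selected = []
    for j in range(p):
        cand = np.flatnonzero(avail)
        order = cand[np.argsort(X[cand, j])]        # candidates sorted by X[:, j]
        take = np.concatenate([order[:r], order[-r:]])
        selected.append(take)
        avail[take] = False                          # no row selected twice
    return np.concatenate(selected)

# demo: select 1000 of 10000 rows; the global extremes of the first
# covariate are always kept, and no row is duplicated
rng = np.random.default_rng(2)
X = rng.standard_normal((10000, 5))
sel = iboss_dopt(X, 1000)
```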
Kuang-Yao Lee; Yale School of Public Health
On Additive Conditional Independence for High-Dimensional Statistical Analysis
February 20, 2017
With the advance of high-throughput technologies, massive and complex data are routinely collected and these data need to be processed and analyzed differently from conventional data. In this presentation I will discuss a nascent concept for analyzing big data — additive conditional independence (ACI) — a three-way statistical relation that shares many similarities with conditional independence. However, its nonparametric characterization does not involve multivariate kernels, so it retains the flexibility of nonparametric estimators while avoiding the curse of dimensionality in high-dimensional settings. We facilitate the implementation of ACI via a case study on nonparametric graphical models, and describe a general framework for adopting ACI to a broader scope. Additionally, to emphasize the increasing impact of ACI, we also introduce several recent developments under various statistical settings. We investigate the properties of the proposed estimators through both theoretical and simulation analyses. The usefulness of our procedures is also demonstrated through an application to gene regulatory network (GRN) inference using a DREAM Challenge dataset. This is joint work with Bing Li (Penn State), Hongyu Zhao (Yale), Lexin Li (UC Berkeley) and Tianqi Liu (Yale).
Dan McNeish; University of North Carolina, Chapel Hill
Is Bayes a Solution for Small Samples?
March 8, 2017
In educational research, small sample data are extremely common, especially when data have a hierarchical structure. Recent meta-analyses have found that between 20% and 50% of studies are classified as having small samples. As barriers to software implementation continue to fall, Bayesian methods are becoming an increasingly popular way to accommodate small sample data, and such a strategy is often suggested. Although it is true that Bayesian methods have advantages over frequentist methods with small sample data, these advantages are not acquired automatically. This talk discusses how typical applications of Bayesian methods in empirical studies are not sufficient to effectively capitalize on small sample advantages and can actually exacerbate small sample issues known to affect frequentist methods. The relevance of small sample methods for emerging methodological developments is discussed. Growth models and multilevel mediation are presented as examples.
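The point that small-sample advantages are not automatic is already visible in the conjugate normal model: with only a handful of observations, the posterior mean is driven by the prior scale rather than the data (an illustrative sketch; all numbers are made up).

```python
import numpy as np

def posterior_mean_normal(ybar, n, sigma, mu0, tau):
    """Posterior mean for a normal mean with known sigma and N(mu0, tau^2)
    prior: a precision-weighted average of the sample mean and mu0."""
    prec = n / sigma**2 + 1.0 / tau**2
    return (n / sigma**2 * ybar + mu0 / tau**2) / prec

# n = 5 observations with sample mean 0.5, sigma = 1, prior centered at 0
diffuse = posterior_mean_normal(0.5, 5, 1.0, 0.0, 100.0)  # essentially the MLE
tight = posterior_mean_normal(0.5, 5, 1.0, 0.0, 0.1)      # prior dominates
```

A carelessly chosen "default" prior can thus pull small-sample estimates far from the data, which is the kind of pitfall the talk addresses.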
Gen Li; Columbia University
A General Framework for the Association Analysis of Heterogeneous Data
April 26, 2017
Multivariate association analysis is of primary interest in many applications. Despite the prevalence of high-dimensional and non-Gaussian data (such as count-valued or binary), most existing methods only apply to low-dimensional datasets with continuous measurements. Motivated by the Computer Audition Lab 500-song (CAL500) music annotation study, we develop a new framework for the association analysis of two sets of high-dimensional and heterogeneous (continuous/binary/count) data. We model heterogeneous random variables using exponential family distributions, and exploit a structured decomposition of the underlying natural parameter matrices to identify shared and individual patterns for two datasets. We also introduce a new measure of the strength of association, and a permutation-based procedure to test its significance. An alternating iteratively reweighted least squares algorithm is devised for model fitting, and several variants are developed to expedite computation and achieve variable selection. The application to the CAL500 data sheds light on the relationship between acoustic features and semantic annotations, and provides an effective means for automatic annotation and music retrieval.
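Classical canonical correlation analysis is the low-dimensional, continuous baseline that the abstract contrasts with; a minimal sketch of its first canonical correlation via thin QR and SVD (the function name and demo data are ours):

```python
import numpy as np

def first_canonical_corr(X, Y):
    """First canonical correlation between two continuous data sets:
    the singular values of Qx' Qy, where Qx and Qy are orthonormal bases
    of the centered data, are the canonical correlations."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

# demo: a linear map of X is perfectly associated with X;
# an independent data set is only weakly associated
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
Y = X @ rng.standard_normal((3, 2))
Z = rng.standard_normal((200, 2))
r_lin = first_canonical_corr(X, Y)
r_ind = first_canonical_corr(X, Z)
```

The framework in the talk generalizes this kind of association measure to high-dimensional binary and count data via exponential family models.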