Workshops

The Statistical Consulting Service (SCS) at the University of Connecticut is pleased to announce three workshops for the fall 2018 semester: “Analysis of Missing Data” on Tuesday, September 25; “Power Analysis” on Friday, October 5; and “Multiplicity Adjustment” on Monday, October 22.

Information about past workshops is available here.


Workshop 1: Analysis of Missing Data

Topic: Analysis of Missing Data

Abstract:

Missing data are frequently encountered in all types of datasets. This workshop will provide a brief overview of missing data mechanisms and introduce several statistical methods for analyzing missing data. Illustrative case studies analyzing missing data with software such as SPSS will be presented.

Outline:

  1. Introduction to missing data mechanisms
  2. Overview of statistical methods for analyzing missing data
  3. Dealing with missing data in available statistical software
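As a taste of item 3, the sketch below shows one common approach, multiple imputation, in R with the mice package; this is an illustrative assumption on our part, since the workshop's case studies use SPSS.

    # Multiple imputation sketch (illustrative; the workshop demonstrates SPSS)
    library(mice)
    data(nhanes)                                              # small example dataset shipped with mice
    imp <- mice(nhanes, m = 5, method = "pmm", seed = 123)    # 5 imputations via predictive mean matching
    fit <- with(imp, lm(bmi ~ age + chl))                     # fit the analysis model to each completed dataset
    summary(pool(fit))                                        # combine the results with Rubin's rules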

Location: MCHU (formerly Laurel Hall), Room 301

Date: Tuesday, September 25, 5:00PM – 6:00PM

Registration: The first workshop has finished.


Workshop 2: Power Analysis

Topic: Power Analysis

Abstract:

Power analysis is a statistical method for determining whether a sample size is large enough to detect a treatment effect at a given significance level. This workshop will introduce the fundamentals of power analysis and cover three common designs: the independent two-sample t test, one-way ANOVA, and multiple regression. The software G*Power will be introduced for carrying out the power analyses during the workshop.
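
As a hedged illustration of these three designs, the R sketch below uses the pwr package (an assumption for illustration only; the workshop itself demonstrates G*Power) to solve for the required sample size at a 5% significance level and 80% power.

    # Sample-size sketches for the three designs (illustrative effect sizes)
    library(pwr)
    # Independent two-sample t test: n per group for a medium effect (d = 0.5)
    pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")
    # One-way ANOVA with 3 groups and a medium effect size (f = 0.25)
    pwr.anova.test(k = 3, f = 0.25, sig.level = 0.05, power = 0.80)
    # Multiple regression with 3 predictors and a medium effect size (f2 = 0.15);
    # this solves for the error degrees of freedom v, so n is roughly u + v + 1
    pwr.f2.test(u = 3, f2 = 0.15, sig.level = 0.05, power = 0.80)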

Location: John W. Rowe Center, Room 122

Date: Friday, October 5, 11:00AM – 12:00PM

Registration: The second workshop has finished.


Workshop 3: Multiplicity Adjustment

Topic: Multiplicity Adjustment

Abstract:

Controlling the probability of falsely rejecting a null hypothesis is critical when multiple hypotheses are tested simultaneously. The most common approach is to control the family-wise error rate (FWER), which keeps the probability of falsely rejecting at least one of the hypotheses within a desired level. As the number of hypotheses grows, however, the Bonferroni correction becomes too conservative and lacks power. This motivates the false discovery rate (FDR), defined as the expected proportion of falsely rejected hypotheses among all rejected hypotheses. Several modern stepwise methods that control the FDR have been proposed to increase power when many hypotheses are tested. We give an overview of classical and modern multiplicity adjustment methods as well as how to run the procedures in R.
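
To make the definitions concrete, the short R sketch below applies FWER- and FDR-controlling adjustments to a set of illustrative p-values using base R's p.adjust (the p-values are made up for illustration, not workshop data).

    # Multiplicity adjustments on illustrative p-values
    pvals <- c(0.001, 0.008, 0.021, 0.041, 0.090, 0.320, 0.550)
    p.adjust(pvals, method = "bonferroni")  # FWER control, single-step
    p.adjust(pvals, method = "BH")          # FDR control (Benjamini-Hochberg)
    p.adjust(pvals, method = "BY")          # FDR control (Benjamini-Yekutieli)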

Outline:

  1. Introduction to multiplicity
    • What is multiplicity
    • Familywise Error Rate (FWER)
    • False Discovery Rate (FDR)
  2. Classical methods
    • Bonferroni, Tukey, REGWQ, Sidak, Scheffe, Dunnett, Hsu (one-step)
    • Benjamini-Hochberg (BH), Benjamini-Yekutieli (BY) (stepwise)
    • Examples
  3. Modern multiple comparisons
    • Resampling methods (permutation, Westfall & Young)
    • Efron (mixture modelling)
  4. R code examples

Location: MCHU (formerly Laurel Hall), Room 301

Date: Monday, October 22, 11:00AM – 12:00PM

Registration: The third workshop has finished.