Online Interdisciplinary Seminars on SM-SBR (Current Seminars)

Online Interdisciplinary Seminars on Statistical Methodology for Social and Behavioral Research

The Online Interdisciplinary Seminars on Statistical Methodology for Social and Behavioral Research are supported by the Department of Statistics and the Department of Educational Psychology at the University of Connecticut (UConn), the Statistical and Applied Mathematical Sciences Institute (SAMSI), and the New England Statistical Society (NESS). The seminars are held online via WebEx, are open to anyone in the world, and are scheduled monthly on Fridays. The aims of the series are to strengthen the connection between the statistics and social/behavioral science communities and to encourage more graduate students to participate in interdisciplinary research.

Information about past seminars is available here.

Date | Speaker | Affiliation | Title
01/28/2022, 3 PM-5 PM | Andrew Ho | Harvard University | Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times
03/04/2022, 3 PM-5 PM | Donald Hedeker | University of Chicago | Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data
03/25/2022, 3 PM-5 PM | Elizabeth Stuart | Johns Hopkins Bloomberg School of Public Health | Combining Experimental and Population Data to Estimate Population Treatment Effects
04/29/2022, 3 PM | Luke Keele | University of Pennsylvania | Approximate Balancing Weights for Clustered Observational Study Designs
05/20/2022 | Katherine Masyn | Georgia State University | TBA

For announcements and WebEx live streaming links, please contact Tracy Burke (tracy.burke@uconn.edu).

For questions related to the seminars, please feel free to contact the organizers, Prof. Xiaojing Wang (xiaojing.wang@uconn.edu) and Prof. Betsy McCoach (betsy.mccoach@uconn.edu).


Dr. Andrew Ho, Harvard University

Friday, 1/28/2022, 3pm

Test Validation for a Crisis: Five Practical Heuristics for the Best and Worst of Times

The COVID-19 pandemic has raised debate about the place of education and testing in a hierarchy of needs. What do tests tell us that other measures do not? Is testing worth the time? Do tests expose or exacerbate inequality? The academic consensus in the open-access AERA/APA/NCME Standards has not seemed to help proponents and critics of tests reach common ground. I propose five heuristics for test validation and demonstrate their usefulness for navigating test policy and test use in a time of crisis: 1) a “four quadrants” framework for purposes of educational tests; 2) the “Five Cs,” a mnemonic for the five types of validity evidence in the Standards; 3) “RTQ,” a mantra reminding test users to read items; 4) the “3 Ws,” a user-first perspective on testing; and 5) the “Two A’s Tradeoff” between Assets and Accountability. I illustrate the application of these heuristics to the challenge of reporting aggregate-level test scores when populations and testing conditions change, as they have over the pandemic (e.g., An, Ho, & Davis, in press; Ho, 2021). I define and discuss these heuristics in the hope that they increase consensus and improve test use in the best and worst of times.
Bio: Dr. Andrew Ho is the Charles William Eliot Professor of Education at the Harvard Graduate School of Education. He is a psychometrician whose research aims to improve the design, use, and interpretation of test scores in educational policy and practice. Dr. Ho is known for his research documenting the misuse of proficiency-based statistics in state and federal policy analysis. He has also clarified properties of student growth models for both technical and general audiences. His scholarship advocates for designing evaluative metrics to achieve multiple criteria: metrics must be accurate, but also transparent to target audiences and resistant to inflation under high stakes. Dr. Ho is a director of the Carnegie Foundation for the Advancement of Teaching and has served on the governing boards of NCME and NAEP. He has chaired the research committee for the Vice Provost for Advances in Learning at Harvard, which governed research on “massive open online courses.” He holds a Ph.D. in Educational Psychology and an M.S. in Statistics from Stanford University. Before graduate school, he taught middle school creative writing in his hometown of Honolulu, Hawaii, and high school Physics and AP Physics in Ojai, California.

Full WEBEX Info for this talk: (Friday, 1/28/2022, 3pm)

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=me0f80ec702d5508cf83ae6a23183fc3d

Meeting number: 2622 217 7365

Password: RMMESTAT

Join by video system

Dial 26222177365@uconn-cmr.webex.com
You can also dial 173.243.2.68 and enter your meeting number.

Join by phone

+1-415-655-0002 US Toll

Access code: 2622 217 7365


Dr. Donald Hedeker, University of Chicago

Friday, 3/04/2022, 3pm

Shared Parameter Mixed-Effects Location Scale Models for Intensive Longitudinal Data

Intensive longitudinal data are increasingly encountered in many research areas. For example, ecological momentary assessment (EMA) and/or mobile health (mHealth) methods are often used to study subjective experiences within changing environmental contexts. In these studies, up to 30 or 40 observations are usually obtained for each subject over a period of a week or so, allowing one to characterize a subject’s mean and variance and to specify models for both. In this presentation, we focus on an adolescent smoking study using EMA in which interest lies in characterizing changes in mood variation. We describe how covariates can influence the mood variances, and we extend the statistical model by adding a subject-level random effect to the within-subject variance specification. This permits each subject to influence both the mean (location) and the variability (the square of the scale) of their mood responses. These random effects are then shared in a model of future smoking levels. Such mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
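For readers who want a concrete reference point, the following is a minimal sketch of the general form of a shared parameter mixed-effects location scale model, written in our own notation and under our own simplifying assumptions; Dr. Hedeker's exact specification may differ. For subject i at occasion j, with mood response y_{ij}:

\[
\begin{aligned}
y_{ij} &= x_{ij}^\top \beta + \upsilon_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N\big(0, \sigma^2_{ij}\big), \\
\log \sigma^2_{ij} &= w_{ij}^\top \tau + \omega_i, \\
(\upsilon_i, \omega_i)^\top &\sim N(0, \Sigma).
\end{aligned}
\]

Here \upsilon_i is a random location (mean) effect and \omega_i is a random scale effect that lets each subject's within-subject variance depart from what the covariates w_{ij} alone would predict. The "shared parameter" part then carries both random effects into a model for the distal outcome, for example a generalized linear model for subject i's future smoking level s_i:

\[
g\big(E[s_i]\big) = z_i^\top \gamma + \lambda_\upsilon \upsilon_i + \lambda_\omega \omega_i,
\]

so that a subject's mood level and mood variability can both predict later smoking.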
Bio: Dr. Donald Hedeker’s chief expertise is in the development and use of advanced statistical methods for clustered and longitudinal data, with particular emphasis on mixed-effects models. He is the primary author of several freeware computer programs for mixed-effects analysis. With Robert Gibbons, Dr. Hedeker is the author of the text “Longitudinal Data Analysis,” published by Wiley in 2006. More recently, he has developed methods and software for the analysis of intensive longitudinal data, which are data with many measurements over time, often collected using mobile devices and/or the internet. Such data are increasingly obtained by researchers in many areas, for example in mobile health (mHealth) and ecological momentary assessment (EMA) studies. Dr. Hedeker is an associate editor for Statistics in Medicine, an elected member of the Society of Multivariate Experimental Psychology and the International Statistical Institute, and a Fellow of the American Statistical Association; he received the Long-Term Excellence Award from the ASA’s Health Policy Statistics Section in 2015. Dr. Hedeker earned his PhD in Quantitative Psychology and BA in Economics from the University of Chicago.

Full WEBEX Info for this talk: (Friday, 3/04/2022, 3pm)

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m6944095dfb2736dba214a9c6f6397805

Meeting number: 2621 074 6486

Password: RMMESTAT

Join by video system

Dial 26210746486@uconn-cmr.webex.com
You can also dial 173.243.2.68 and enter your meeting number.

Join by phone

+1-415-655-0002 US Toll

Access code: 2621 074 6486


Dr. Elizabeth Stuart, Johns Hopkins Bloomberg School of Public Health

Friday, 3/25/2022, 3pm

Combining Experimental and Population Data to Estimate Population Treatment Effects

With increasing attention being paid to the relevance of studies for real-world practice (especially in comparative effectiveness research), there is growing interest in external validity and in assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals in the trial, they do not necessarily inform what the effects would be in some other, potentially somewhat different, population. Despite increasing discussion of this limitation of traditional trials, relatively little statistical work has been done to develop methods for assessing or enhancing the external validity of randomized trial results. In addition, new “big data” resources offer the opportunity to utilize data on broad target populations. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and the target population on a set of observed characteristics. The talk will also cover the underlying assumptions, methods for assessing robustness to violations of those assumptions, and implications for how future studies should be designed to enhance the ability to assess generalizability.
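As a rough illustration of the reweighting idea, here is a short, self-contained Python sketch using inverse odds of trial participation; the synthetic data, variable names, and model choices are ours and are not taken from Dr. Stuart's work:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a trial (s = 1) that over-represents
# high-x individuals relative to the target population (s = 0).
n = 2000
x = rng.normal(size=n)                                  # observed covariate
s = rng.binomial(1, 1 / (1 + np.exp(-x)))               # trial participation
t = np.where(s == 1, rng.binomial(1, 0.5, size=n), 0)   # randomized within the trial only
y = t * (1.0 + 0.5 * x) + x + rng.normal(size=n)        # treatment effect varies with x

# Step 1: model the probability of being in the trial given covariates.
X = x.reshape(-1, 1)
p = LogisticRegression().fit(X, s).predict_proba(X)[:, 1]

# Step 2: weight trial participants by the odds of NON-participation, so the
# reweighted trial matches the covariate distribution of the target population.
w = (1 - p) / p

# Step 3: the weighted difference in means among trial participants estimates
# the population average treatment effect.
trt = (s == 1) & (t == 1)
ctl = (s == 1) & (t == 0)
pate = np.average(y[trt], weights=w[trt]) - np.average(y[ctl], weights=w[ctl])
print(f"Estimated population average treatment effect: {pate:.2f}")

Because randomization holds within the trial, the weights only need to correct for selection into the trial, not for confounding of the treatment itself; whether the weighting variables capture all effect-modifying differences between sample and population is exactly the kind of robustness question the talk addresses.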
Bio: Dr. Elizabeth A. Stuart is Bloomberg Professor of American Health in the Department of Mental Health at the Johns Hopkins Bloomberg School of Public Health, with joint appointments in the Department of Biostatistics and the Department of Health Policy and Management. She received her Ph.D. in statistics in 2004 from Harvard University and is a Fellow of the American Statistical Association (ASA) and the American Association for the Advancement of Science (AAAS). She has received research funding for her work from the National Science Foundation, the Institute of Education Sciences, the WT Grant Foundation, and the National Institutes of Health and has served on advisory panels for the National Academy of Sciences, the US Department of Education, and the Patient Centered Outcomes Research Institute. She received the midcareer award from the Health Policy Statistics Section of the ASA, the Gertrude Cox Award for applied statistics, Harvard University’s Myrto Lefkopoulou Award for excellence in Biostatistics, and the inaugural Society for Epidemiologic Research Marshall Joffe Epidemiologic Methods award.

Full WEBEX Info for this talk: (Friday, 3/25/2022, 3pm)

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=mb26cc940795502d8ae9ff7e274d435bb

Meeting number: 2620 779 2220

Password: RMMESTAT

Join by video system

Dial 26207792220@uconn-cmr.webex.com
You can also dial 173.243.2.68 and enter your meeting number.

Join by phone

+1-415-655-0002 US Toll

Access code: 2620 779 2220


Dr. Luke Keele, University of Pennsylvania

Friday, 4/29/2022, 3pm

Approximate Balancing Weights for Clustered Observational Study Designs

In a clustered observational study, a treatment is assigned to groups, and all units within a group are exposed to the treatment. Clustered observational studies are common in education, where treatments are given to all students in some schools but withheld from all students in other schools. Such studies require specialized methods to adjust for observed confounders, and extant work has developed matching methods that take key elements of clustered treatment assignment into account. Here, we develop a new method for statistical adjustment in clustered observational studies using approximate balancing weights. An approach based on approximate balancing weights improves on extant matching methods in several ways. First, our methods highlight the possible need to account for differential selection into clusters. Second, we can automatically balance interactions between unit-level and cluster-level covariates. Third, we can balance higher moments of key cluster-level covariates. We also outline an overlap weights approach for cases where common support across treated and control clusters is poor, and we introduce an augmented estimator that incorporates outcome information. We show that our approach has a dual representation as an inverse propensity score weighting estimator based on a hierarchical propensity score model. We apply this algorithm to assess a school-based intervention through which students in treated schools were exposed to a new reading program during summer school. Overall, we find that balancing weights tend to produce superior balance relative to extant matching methods, and the approximate balancing weight approach tends to require less input from the user to achieve high levels of balance.
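For orientation, one common formulation of approximate balancing weights solves a convex optimization problem: choose control-unit weights as close to uniform as possible subject to approximate covariate balance. The sketch below uses our own notation and omits the cluster-specific structure of Dr. Keele's estimator:

\[
\min_{w} \sum_{i:\, T_i = 0} \Big( w_i - \tfrac{1}{n_0} \Big)^2
\quad \text{subject to} \quad
\Big| \sum_{i:\, T_i = 0} w_i X_{ik} - \bar{X}_{1k} \Big| \le \delta_k \ \text{for each covariate } k,
\qquad \sum_{i:\, T_i = 0} w_i = 1, \quad w_i \ge 0,
\]

where n_0 is the number of control units, \bar{X}_{1k} is the treated-group mean of covariate k, and the tolerances \delta_k are what make the balance approximate rather than exact. In the clustered setting described above, the columns of X would include unit-level covariates, cluster-level covariates, their interactions, and higher moments of key cluster-level covariates.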
Bio: Dr. Luke Keele (Ph.D., University of North Carolina, Chapel Hill, 2003) is currently an Associate Professor at the University of Pennsylvania with joint appointments in Surgery and Biostatistics. Professor Keele specializes in research on applied statistics with a focus on causal inference, design-based methods, matching, natural experiments, and instrumental variables. He also conducts research on topics in educational program evaluation, election administration, and health services research. He has published articles in the Journal of the American Statistical Association, Annals of Applied Statistics, Journal of the Royal Statistical Society, Series A, The American Statistician, American Political Science Review, Political Analysis, and Psychological Methods.

Full WEBEX Info for this talk: (Friday, 4/29/2022, 3pm)

https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m35b82d4dc6d3e77536aa48390a02485b

Meeting number: 2622 749 2408

Password: RMMESTAT

Join by video system

Dial 26227492408@uconn-cmr.webex.com
You can also dial 173.243.2.68 and enter your meeting number.

Join by phone

+1-415-655-0002 US Toll

Access code: 2622 749 2408