
FALL 2020

Date: September 30, 2020 13-14h EST

Speaker: Dr. Holly P. O’Rourke
Assistant Professor of Measurement and Statistical Analysis
Family and Human Development Program
T. Denny Sanford School of Social and Family Dynamics
Arizona State University

 

Title: Mediation with Zero-Inflated Count Outcomes for Substance Use Data: Negative Binomial, Poisson, and Hurdle Models

 

Abstract:
Many studies of substance use are concerned with examining mechanisms for reducing substance use behaviors in addition to ultimate outcomes. Researchers often use mediation analysis to achieve this aim. Additionally, a common issue in substance use research is the presence of many zeroes in a count outcome variable, such as number of drinks per week or number of substances used in the past month. Mediation methods have been developed for a variety of outcome types, including continuous, count, and binary outcomes (Coxe & MacKinnon, 2010; Geldhof, Anthony, Selig, & Mendez-Luck, 2018; MacKinnon, 2008). However, less research has examined mediation for zero-inflated (ZI) count outcomes, and existing approaches are not easily extended to ZI count models, which model the zeroes and the counts separately and therefore split the mediated effect into two parts. This talk will describe the process of mediation analysis for ZI count outcomes and call attention to the specific issues that arise when count outcomes are ZI. A method is described to assess mediation for ZI count outcomes that is applicable to a variety of generalized linear models (GzLMs), including ZI Poisson (ZIP), ZI negative binomial (ZINB), and hurdle models. Once a model is chosen, mediated effects can be calculated and tests of mediation can be conducted, including bootstrapped confidence intervals for the conditional mediated effects. The differences between mediation for ZI and non-ZI models are highlighted using several examples of substance use data. After illustrating how a recent mediation method for zero-inflated counts can be applied to prevention data, future directions for mediation with zero-inflated count outcomes are discussed, including extensions to longitudinal models.
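
As a rough illustration of the modeling setup described above (not of Dr. O'Rourke's specific method), the sketch below fits a zero-inflated Poisson outcome model with the pscl package and bootstraps the two parts of the mediated effect; the data, variable names, and effect sizes are simulated placeholders.

    # Illustration only: product-of-coefficients mediation with a ZI count outcome.
    # x = treatment, m = mediator, y = zero-inflated count outcome (all simulated).
    library(pscl)   # zeroinfl() and hurdle() for ZI and hurdle count models
    library(boot)   # nonparametric bootstrap

    set.seed(1)
    n  <- 500
    x  <- rbinom(n, 1, 0.5)
    m  <- 0.4 * x + rnorm(n)
    p0 <- plogis(-0.5 - 0.3 * m)            # probability of a structural zero
    mu <- exp(0.2 + 0.3 * m + 0.1 * x)      # mean of the count process
    y  <- ifelse(rbinom(n, 1, p0) == 1, 0, rpois(n, mu))
    dat <- data.frame(x, m, y)

    # The mediated effect splits into two parts: a * b_count (count part)
    # and a * b_zero (zero-inflation part).
    med_effects <- function(d, i) {
      d   <- d[i, ]
      a   <- coef(lm(m ~ x, data = d))["x"]
      fit <- zeroinfl(y ~ m + x | m + x, data = d, dist = "poisson")
      c(count = unname(a * coef(fit)["count_m"]),
        zero  = unname(a * coef(fit)["zero_m"]))
    }

    boot_out <- boot(dat, med_effects, R = 1000)
    boot.ci(boot_out, type = "perc", index = 1)  # CI, count-part mediated effect
    boot.ci(boot_out, type = "perc", index = 2)  # CI, zero-part mediated effect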

 

 

 

Date: October 28, 2020 13-14h EST

Speaker: Dr. Sarfaraz Serang

Assistant Professor - Quantitative Psychology Program

Emma Eccles Jones College of Education and Human Services

Utah State University

Title: Mplus Trees and Applications to COVID-19

 

Abstract: 

The fusion of ideas from the structural equation modeling and data mining literatures has allowed for the development of new techniques for theory-guided exploration. This presentation covers Mplus Trees, an approach that allows complex structural equation models to be fit within the nodes of a decision tree. After introducing the method, I will discuss how it relates to other methods in terms of how it structures heterogeneity, as well as how it can be used to understand causal relationships. I will conclude with an analysis of COVID-19 data showing how changes in mobility after the onset of the pandemic can be causally attributed in part to political alignment, and discuss the associated policy implications.
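
For readers who want to try the approach, Mplus Trees is implemented in the MplusTrees R package (which also relies on MplusAutomation, rpart, and a working Mplus installation). The sketch below follows the usage pattern in the package documentation as I recall it; the data, model, and argument names should be treated as assumptions and checked against the package documentation.

    # Hedged sketch of an Mplus Trees analysis: a one-factor model fit within
    # the nodes of a decision tree that splits on age and sex.
    # Check the MplusTrees documentation for the exact interface.
    library(MplusTrees)
    library(MplusAutomation)  # mplusObject() builds the Mplus input script
    library(rpart)            # rpart.control() sets the tree-growing options

    script <- mplusObject(
      TITLE = "Example factor model",
      MODEL = "f1 BY y1-y4;",
      usevariables = c("y1", "y2", "y3", "y4"),
      rdata = mydata)                       # mydata is a placeholder data frame

    fit <- MplusTrees(script, data = mydata, group = ~id,
                      rPartFormula = ~ age + sex,
                      control = rpart.control(minsplit = 100, cp = 0.01))
    fit                                     # inspect the estimated tree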

Date: November 25, 2020 13-14h EST

Speaker: Dr. Lisa DeBruine

Institute of Neuroscience & Psychology
University of Glasgow

 

Title: Increasing rigour with machine-readable study descriptions

 

Abstract:

The increasingly digital workflow in science has made it possible to share almost all aspects of the research cycle, from pre-registered analysis plans and study materials to the data and analysis code that produce the reported results. Although the growing availability of research output is a positive development, most of this digital information is in a format that makes it difficult to find, access, and reuse. A major barrier is the lack of a framework to concisely describe every component of research in a machine-readable format: a grammar of science. I will discuss the problems that machine-readable study descriptions might solve, outline potential use cases, and demonstrate the feasibility of a prototype machine-readable study description in a real-life example using the R package scienceverse.

 

Resources:

- Preprint: Lakens, D., & DeBruine, L. M. (2020, January 27). Improving Transparency, Falsifiability, and Rigour by Making Hypothesis Tests Machine Readable. https://doi.org/10.31234/osf.io/5xcda

- R package with vignettes: https://scienceverse.github.io/scienceverse/ 
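
The vignettes linked above document the actual scienceverse interface. As a purely generic illustration of the underlying idea (this is not the scienceverse schema), a machine-readable study description pairs each hypothesis with the analysis that tests it and an explicit evaluation criterion, for example:

    # Generic illustration only (NOT the scienceverse format): a hypothesis,
    # its test, and its evaluation criterion expressed as structured data
    # that software, not just a human reader, can check.
    study_description <- list(
      hypothesis = list(
        id          = "H1",
        description = "Treatment group scores higher than control at posttest"
      ),
      analysis = list(
        id   = "A1",
        code = "t.test(posttest ~ group, data = dat)"
      ),
      criterion = list(
        hypothesis = "H1",
        analysis   = "A1",
        result     = "p.value",
        comparator = "<",
        value      = 0.05
      )
    )

    # Serialized (e.g., to JSON), such a description lets software evaluate
    # automatically whether a preregistered criterion was met.
    jsonlite::toJSON(study_description, auto_unbox = TRUE, pretty = TRUE)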

Date: December 16, 2020 13-14h EST

Speaker: Dr. Yves Rosseel

Department of Data Analysis

Ghent University

 

Title: Small sample solutions for SEM

 

Abstract:

Structural equation modeling (SEM) is a widely used statistical technique for studying relationships among multivariate data. Unfortunately, when the sample size is small, several problems may arise: nonconvergence, bias, and nonadmissible solutions (e.g., negative variances). A popular solution, often suggested in the literature, is to switch to a Bayesian approach. In this presentation, however, I will stay within the frequentist framework and present two solutions that may fix many of these problems.

A first solution is merely a computational trick. Instead of using unconstrained optimization (for example, quasi-Newton methods), one could impose simple lower and upper bounds on a selection of model parameters during optimization. By using well-chosen bounds that are just outside the admissible parameter space, we are able to stabilize regular ML estimation in (very) small samples.
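
A minimal lavaan sketch of this first solution is given below; the model and data are placeholders, and the bounds= argument assumes a reasonably recent lavaan version (0.6-6 or later).

    library(lavaan)

    model <- '
      f1 =~ y1 + y2 + y3
      f2 =~ y4 + y5 + y6
      f2 ~ f1
    '
    # Regular ML estimation, but with automatic lower/upper bounds placed just
    # outside the admissible parameter space to stabilize small-sample estimation.
    fit_bounded <- sem(model, data = smalldata, bounds = "standard")
    summary(fit_bounded)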

A second solution is the so-called structural-after-measurement (SAM) approach, in which estimation proceeds in several steps. In a first step, only the parameters related to the measurement part of the model are estimated. In a second step, only the parameters related to the structural part are estimated. Several implementations of this old idea will be presented. A distinction will be made between local and global SAM, and it will be suggested that various alternative estimators (including non-iterative estimators) could be used for the different model parts. It turns out that this approach is not only effective in small samples, but also robust against many types of model misspecification. Many existing alternatives (factor score regression with Croon corrections, sum scores with fixed reliabilities, model-implied instrumental variables estimation, Fuller's method, ...) turn out to be special cases of this general framework. Finally, I will briefly demonstrate how these solutions can be used in the R package lavaan.
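
A corresponding sketch of the SAM approach, reusing the model and data objects from the previous snippet; the sam() function assumes an up-to-date lavaan installation, as it was added in a recent release.

    # Structural-after-measurement estimation: the measurement blocks are
    # estimated first, then the structural regressions, here using local SAM.
    fit_sam <- sam(model, data = smalldata, sam.method = "local")
    summary(fit_sam)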
 

WINTER 2021

Date: February 22, 2021 13-14h EST

Speaker: Dr. Mariola Moeyaert 

Associate Professor of Statistics
Educational Psychology and Methodology
University at Albany – State University of New York

 

 

Title: Multilevel Meta-Analysis of Single-Case Experimental Data: Recent Methodological Developments and Innovations

 

Abstract:
Due to the increased interest in establishing an evidence base for interventions, along with the difficulties encountered in large-scale experimental studies, there has been a substantial increase in the use of single-case experimental designs. Single-case experimental designs allow researchers to investigate the effectiveness of an intervention at the individual level and to evaluate how that effectiveness evolves over time. To enhance generalizability, researchers replicate across subjects (i.e., cases). Over the last decade, my research team has proposed, developed, and promoted the use of multilevel models to synthesize data across subjects and studies, allowing for estimation of the mean intervention effect, of variation in effects across subjects and studies, and of moderator effects of subject and study characteristics. For instance, multilevel models can handle unstandardized and standardized raw data or effect sizes, linear and nonlinear time trends, intervention effects on time trends, and autocorrelation and other complex covariance structures at each level.

With the support of three IES grants, my research team and I have been continuously evaluating and further developing the multilevel approach. During the quant seminar, I will present some of the latest developments and innovations we have been working on and have published, such as (1) analysis of non-continuous outcome variables (count data), (2) dealing with dependencies in intervention effects, (3) comparing one-stage versus two-stage meta-analytic methods, (4) evaluating bias adjustments for synthesizing standardized data, (5) evaluating the impact of response-guided baseline phase extensions, and (6) comparing within- and between-series effect estimates.
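
The models discussed in the talk are considerably more elaborate, but the basic multilevel interrupted time-series structure underlying this work can be sketched with lme4; the data layout and variable names below are placeholder assumptions.

    library(lme4)

    # Two-level model for single-case data replicated across cases:
    #   level 1: repeated measurements within a case (baseline vs. intervention),
    #   level 2: cases, with random baseline levels, trends, and treatment effects.
    # scd is assumed to contain: outcome, time (session number),
    # phase (0 = baseline, 1 = intervention), and case (subject ID).
    fit <- lmer(outcome ~ time * phase + (time * phase | case), data = scd)
    summary(fit)

    # Adding a third (study) level gives the meta-analytic model used to
    # synthesize effects across studies, e.g.:
    # lmer(outcome ~ time * phase + (time * phase | study/case), data = scd_multi)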
 

 

 

Date: February 15, 2021 14-15h EST

Speaker: Dr. Terrence Jorgensen

Assistant Professor, Methods and Statistics
Research Institute for Child Development and Education, the University of Amsterdam

Title: Building dyadic network models for dyad-constant dependent variables

Abstract: 

Dyadic network data occur when each member of a group provides data about each other member of the group (e.g., how much they like each other person). Such data have a complex nesting structure, in which bivariate responses (e.g., Person A's liking of B and vice versa) depend on out-going and in-coming random effects that are correlated within individuals, as well as on residuals that are correlated within dyads. Dyadic network models for such data include the social relations model (SRM; Kenny, 1994) and the p2 and j2 models (Zijlstra, 2017; Zijlstra et al., 2007, 2008), but I have seen no application or generalization of this framework that accommodates a rarely discussed type of variable: variables that are constant within a dyad. Dyad-constant variables can include background variables, such as whether a dyad is same- or opposite-sex or how many years two friends have known each other, which require no special modification to use as predictors (Jorgensen et al., 2018). But they can also be outcomes, such as the difference in a married couple's relationship satisfaction or the similarity in symptoms of a (set of) psychological disorder(s). I explore how such dyad-constant outcomes can be modeled, both cross-sectionally and longitudinally, demonstrating how to estimate parameters using the Bayesian modeling software Stan on a data set from a clinic for eating-disorder patients.
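
As a simplified illustration of the random-effects structure described above, a crossed random-effects model separating out-going (actor), in-coming (partner), and dyad variance can be sketched with lme4; note that this omits the actor-partner and within-dyad covariances that the full SRM, and the models in this talk, also estimate. The data layout and variable names are assumptions.

    library(lme4)

    # roundrobin is assumed to have one row per directed response:
    #   liking  = how much the actor likes the partner
    #   actor, partner = person IDs (each person appears in both roles)
    #   dyad    = unordered pair ID shared by the two directed responses
    fit <- lmer(liking ~ 1 + (1 | actor) + (1 | partner) + (1 | dyad),
                data = roundrobin)
    summary(fit)   # variance components for actor, partner, and dyad effects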

Date: March 22, 2021 13-14h EST

Speaker: Dr. Mijke Rhemtulla

Associate Professor
Department of Psychology

University of California Davis

 

Title: TBA

Abstract:

TBA

Date: April 19, 2021 13-14h EST

Speaker: Dr. Jeremy Biesanz

Associate Professor

Department of Psychology

The University of British Columbia

 

Title: TBA

 

Abstract:

TBA

 
