Highlights of 2011 - Institute for Policy Research - Northwestern University


tended to focus on differences in averages across groups. This is consistent with most demographic research, which has focused on rates rather than totals. Total numbers of people with certain types of human capital are important for U.S. competitiveness, however. Thus, Spencer is developing a new model that allows for aging and retirement, international movement, and potential policy effects of improved incentives for attracting and training students. Having a framework for systematically organizing information about human capital could help U.S. policymakers both in tracking progress and in developing strategies to increase particular kinds of human capital. Spencer also hopes the statistics will be useful in discussions about the future of U.S. higher education and, by extension, K–12 and even preschool education.
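The distinction between rates and totals can be made concrete with a simple stock-flow calculation. The sketch below is not Spencer's model; it is a minimal, purely hypothetical illustration of projecting the total number of people holding a given credential forward in time under assumed inflows (new graduates, net migration) and outflows (retirement).

```python
# A toy stock-flow projection (all numbers hypothetical, not Spencer's model):
# track the total count of credential holders, not just rates.
def project_stock(stock: float, years: int, graduates_per_year: float,
                  net_migration_per_year: float, retirement_rate: float) -> float:
    """Project the total number of credential holders forward year by year."""
    for _ in range(years):
        stock = (stock * (1 - retirement_rate)   # outflow: aging and retirement
                 + graduates_per_year            # inflow: newly trained students
                 + net_migration_per_year)       # inflow: net international movement
    return stock

# Hypothetical illustration: 1,000,000 holders today, 40,000 graduates and
# 10,000 net immigrants per year, 2 percent retiring annually, over 10 years.
print(f"{project_stock(1_000_000, 10, 40_000, 10_000, 0.02):,.0f}")
```

In a framework of this kind, policy levers such as improved incentives for attracting and training students would enter as changes to the inflow terms.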

Reliable Covariate Measurement

The effect of unreliability of measurement on propensity score-adjusted treatment effects is a topic that has largely gone unexamined. A study published in the Journal of Educational and Behavioral Statistics by IPR social psychologist Thomas D. Cook and his colleagues Peter Steiner of the University of Wisconsin–Madison and William Shadish of the University of California, Merced, presents results from their work simulating different degrees of unreliability in the multiple covariates that were used to estimate a propensity score. The simulation uses the same data as two prior studies in which the researchers showed that a propensity score formed from many covariates demonstrably reduced selection bias. They also identified the subsets of covariates from the larger set that were most effective for bias reduction. Adding different degrees of random error to these covariates in a simulation, the researchers demonstrate that unreliability of measurement can degrade the ability of propensity scores to reduce bias. Specifically, increases in reliability promote bias reduction only if the covariates are effective in reducing bias to begin with. They found that increasing or decreasing the reliability of covariates that do not effectively reduce selection bias makes no difference at all.
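The mechanism is easy to see in a small simulation. The sketch below is not the authors' code; it uses a hypothetical data-generating process with a single confounder, estimates a propensity score from an error-prone version of that confounder, and applies inverse-probability weighting, showing how the remaining bias grows as the covariate's reliability falls.

```python
# A minimal sketch (hypothetical data, not the published simulation) of how
# covariate unreliability degrades propensity score bias reduction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

def remaining_bias(reliability: float) -> float:
    """Bias of an IPW-adjusted effect estimate when the confounder is
    observed with the given reliability (the true treatment effect is zero)."""
    x = rng.normal(size=n)                                  # true confounder
    treat = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x)))     # self-selection on x
    y = 2.0 * x + rng.normal(size=n)                        # outcome; no treatment effect
    # Observed covariate = true value plus random measurement error
    noise_sd = np.sqrt((1 - reliability) / reliability)
    x_obs = x + rng.normal(scale=noise_sd, size=n)
    # Propensity score estimated from the error-prone covariate
    ps = LogisticRegression().fit(x_obs[:, None], treat).predict_proba(x_obs[:, None])[:, 1]
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))          # inverse-probability weights
    est = (np.average(y[treat == 1], weights=w[treat == 1])
           - np.average(y[treat == 0], weights=w[treat == 0]))
    return abs(est)

for r in (1.0, 0.8, 0.5, 0.2):
    print(f"reliability {r:.1f}: remaining bias ~ {remaining_bias(r):.2f}")
```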

Random and Cutoff-Based Assignment

Cook and IPR postdoctoral fellow Vivian Wong co-authored a study in Psychological Methods reviewing past studies that compared randomized experiments to regression discontinuity designs; these comparisons mostly found similar results, but with some significant exceptions. The authors argue that the exceptions might be due to confounds of study characteristics with assignment method, or to failure to estimate the same parameter across methods. In their study, they correct these problems by randomly assigning 588 participants to a randomized experiment or a regression discontinuity design in which they are otherwise treated identically, comparing results that estimate both the same and different parameters. The analysis includes parametric, semiparametric, and nonparametric methods of modeling nonlinearities. Results suggest that estimates from regression discontinuity designs approximate the results of randomized experiments reasonably well, but they also raise the issue of what constitutes agreement between the two estimates.
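The logic of the comparison can be sketched with simulated data. The example below is hypothetical and is not Wong and Cook's analysis; it applies the two assignment rules, random assignment and a cutoff on an assignment score, to the same simulated pool and compares the resulting estimates of a known treatment effect, using a simple parametric linear model for the regression discontinuity estimate.

```python
# A hedged sketch (simulated data, not the published study): compare a
# randomized experiment with a cutoff-based (regression discontinuity) design.
import numpy as np

rng = np.random.default_rng(1)
n, true_effect, cutoff = 5_000, 2.0, 0.0
score = rng.normal(size=n)                 # assignment variable
noise = rng.normal(size=n)

def outcome(treated):
    return 1.0 + 0.8 * score + true_effect * treated + noise

# Randomized experiment: treatment assigned by coin flip
t_rct = rng.binomial(1, 0.5, size=n)
y_rct = outcome(t_rct)
est_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

# Regression discontinuity: treatment assigned by the cutoff rule; estimate
# the effect at the cutoff with a parametric linear regression
t_rd = (score >= cutoff).astype(int)
y_rd = outcome(t_rd)
X = np.column_stack([np.ones(n), t_rd, score - cutoff])
beta = np.linalg.lstsq(X, y_rd, rcond=None)[0]
est_rd = beta[1]

print(f"randomized estimate ~ {est_rct:.2f}; RD estimate ~ {est_rd:.2f}")
```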

Accounting for Missing Survey Data

Missing data are prevalent in social science and health studies, both in the form of attrition (in which responses “drop out” of the data set after a certain point) and in nonmonotone patterns of intermittently missing values. Yet even within these patterns, not all missing data can be treated equally; certain trends in missing data might indicate wider trends that should be taken into account when forming conclusions about the data set as a whole. In an article published in Biometrics, marketing professor and IPR associate Yi Qian, with Hua Yun Chen and Hui Xie of the University of Illinois at Chicago, investigates the use of a generalized additive missing data model that, contrary to the existing literature, does not assume a restricted linear relationship between missing data and the potentially missing outcome. Using a bone fracture data set, they conduct an extensive simulation study. Their simulation shows that the proposed method helps reduce bias that might arise from misspecification of the functional forms of predictors in the missing data model.
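A toy example shows why that flexibility matters. The sketch below does not implement the authors' generalized additive model; as a stand-in, it uses a polynomial logistic model for the missingness probability (all variables hypothetical) to illustrate that a linear specification can leave bias that a more flexible one removes when missingness depends nonlinearly on an observed quantity.

```python
# A toy illustration (hypothetical data; polynomial stand-in, not the authors'
# generalized additive model) of flexible vs. linear missing-data models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)                     # fully observed covariate
y = 1.0 + x**2 + rng.normal(size=n)        # outcome; true mean is 2
# Probability of being observed depends nonlinearly (quadratically) on x
p_obs = 1 / (1 + np.exp(-(1.5 - 0.5 * x**2)))
observed = rng.binomial(1, p_obs).astype(bool)

def ipw_mean(design):
    """Inverse-probability-weighted mean of y under a fitted missingness model."""
    model = LogisticRegression(max_iter=1000).fit(design, observed)
    p_hat = model.predict_proba(design)[:, 1]
    return np.average(y[observed], weights=1 / p_hat[observed])

linear = ipw_mean(x[:, None])              # missingness model linear in x
flexible = ipw_mean(PolynomialFeatures(degree=3, include_bias=False).fit_transform(x[:, None]))
print(f"true mean ~ {y.mean():.2f}; complete cases ~ {y[observed].mean():.2f}")
print(f"linear missingness model ~ {linear:.2f}; flexible model ~ {flexible:.2f}")
```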

Learning Interventions Institute

Hedges co-led the 2011 American Society for Microbiology (ASM)/National Institute of General Medical Sciences Learning Interventions Institute on “Understanding Research Techniques to Study Student Interventions,” held January 10–13 in Washington, D.C. The institute aimed to introduce new behavioral and social science research methods that can be used to understand factors affecting student interest, motivation, and preparedness for research careers in science and medicine. The program used an intensive “learn, apply, and share” process of lectures and discussions, followed by small-group work. All elements of the process focused on how research can be used to learn what efforts drive and support academic success and commitment by students studying in science, technology, engineering, and mathematics (STEM) fields.

Evaluating Fellowship Programs<br />

Evaluating the quality of researchers is a key component of any strategy to improve the overall quality of research output. Hedges and IPR research associate Evelyn Asch were part of the Spencer Foundation’s full-scale review of its two highly prestigious fellowship programs, designed to determine the programs’ effectiveness in helping fellows become stronger researchers than they would be otherwise. Hedges and Asch, along with graduate student Jennifer Hanis, completed evaluations of the Spencer Postdoctoral Fellowship program and

