Credible Management Research Newsletter - March 2025
Welcome to the eighth edition of the Credible Management Research Newsletter. Each month, we bring you interesting new thoughts, papers, and something for the eyes and ears (video/podcast) to help you stay on top of the latest developments in improving science. This newsletter is written by Jost (disclaimer: he used ChatGPT for copy-editing), so if you have any questions or comments, please contact him.
What to think about? 🧠
We covered replications in the second edition of our newsletter. Today, I’d like to add a bit more on this topic because of a potential scandal in developmental economics. In February 2025, the Institute for Replication reported on X that it had been alerted to possible misconduct related to data collected by an NGO that had conducted randomized experiments for researchers, work that led to several publications in leading economic journals. Researchers then examined the data and code (note: most top-tier journals in economics require authors to upload data and code) and identified multiple issues, including implausible randomization (e.g., Kjelsrud et al., 2025). Currently, there’s a back-and-forth between the replicators and the authors of the original papers. It will be interesting to see how things develop in the coming months. The replication team has created an OSF page that offers an excellent overview of the potentially affected papers and recent developments.
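For readers curious what "implausible randomization" means in practice: one common diagnostic is a balance check, since under genuine randomization, pre-treatment covariates should not differ systematically across treatment arms. Below is a minimal, purely illustrative sketch of such a check on synthetic data, using a simple permutation test. This is not the replication team's actual procedure, and all variable names and numbers are made up for illustration.

```python
import random
import statistics

random.seed(42)

# Synthetic baseline data: 200 units with a pre-treatment covariate
# (here labeled "income" purely for illustration).
income = [random.gauss(100, 15) for _ in range(200)]

# Genuine randomization: assignment ignores the covariate entirely.
treat = [random.random() < 0.5 for _ in range(200)]

def mean_diff(cov, assign):
    """Difference in covariate means between treated and control units."""
    treated = [c for c, t in zip(cov, assign) if t]
    control = [c for c, t in zip(cov, assign) if not t]
    return statistics.mean(treated) - statistics.mean(control)

observed = mean_diff(income, treat)

# Permutation test: reshuffle the assignment labels many times and count
# how often a difference at least as large arises by chance alone.
n_perm = 2000
count = 0
for _ in range(n_perm):
    shuffled = treat[:]
    random.shuffle(shuffled)
    if abs(mean_diff(income, shuffled)) >= abs(observed):
        count += 1
p_value = count / n_perm

print(f"baseline mean difference: {observed:.2f}, p = {p_value:.3f}")
```

A very small p-value on a pre-treatment covariate would flag imbalance that is hard to reconcile with the claimed randomization; with the genuinely random assignment simulated here, no such flag should appear.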
No matter how this scandal may further unfold, there are already a few lessons we, as management researchers, can learn: (1) Trust is good, control is better—especially in the data collection process. (2) Small interventions (e.g., two hours of phone counseling) seldom have large effects that persist over time. (3) Independent replication requires access to data and code. Recent policy changes at Management Science can serve as a model for other journals, with a recent study (Fisar et al., 2024) showing how effective these policies can be.
What are your takeaways—and what can we, as a field, do better? Feel free to share your thoughts in the comments or DM us.
What to read? 📚
Imagine giving 146 teams the same ingredients and asking them to cook the same meal. Most of us would expect the dishes to taste similar. But do they? Huntington-Klein and colleagues explored this question—not in a kitchen, but in economics. Specifically, they asked how researcher decisions related to data cleaning, research design, and interpretation of a policy question affect variation in estimated treatment effects. To test this, they designed a three-stage experiment involving 146 research teams, all analyzing the same dataset to answer the same policy question. But the conditions changed in each stage:
Stage 1 – Freedom of Choice
Teams had few constraints. They could clean the data as they saw fit, choose their own analytic methods, and interpret the policy question however they liked. The result? A wide range of treatment effects. Sample sizes also varied wildly—from about 60,000 to more than 350,000 observations!
Stage 2 – Shared Recipe
Teams were given a shared research design but could still clean the data themselves. Surprisingly, treatment effect variation increased slightly, largely due to imperfect adherence to the shared design. Yet, sample sizes began to converge.
Stage 3 – Full Standardization
Teams received pre-cleaned data and the shared analysis plan. Variation shrank considerably, and nearly all teams used the same sample size.
My key take-aways from this study: (1) Even highly skilled researchers analyzing the same data can reach very different conclusions. (2) There are many researcher degrees of freedom, and they significantly impact results. (3) We often focus heavily on research design, but data cleaning might deserve equal attention—and a bit more standardization.
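To see how a single data-cleaning decision can move an estimate, consider this toy example on synthetic data (the numbers are invented for illustration and have nothing to do with the study's actual dataset): two analysts compute the same summary statistic, differing only in how they handle extreme values.

```python
import statistics

# Illustrative (synthetic) outcome data containing a few extreme values.
outcomes = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 14.0, 2.3, 1.7, 15.5]

# Analyst A keeps every observation as recorded.
estimate_a = statistics.mean(outcomes)

# Analyst B drops values more than 3x the median as "implausible".
cutoff = 3 * statistics.median(outcomes)
cleaned = [x for x in outcomes if x <= cutoff]
estimate_b = statistics.mean(cleaned)

print(f"keep outliers: {estimate_a:.2f}, drop outliers: {estimate_b:.2f}")
# → keep outliers: 4.59, drop outliers: 2.05
```

Both choices are defensible, yet the estimates differ by more than a factor of two—exactly the kind of divergence the three-stage experiment documents at scale.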
Bon appétit!
What to hear? 🎧
The Institute for Replication also hosts a podcast called “Allegedly Does Not Replicate.” I recently started listening and can highly recommend the conversations.
That’s it for this month! Feel free to share interesting papers or developments around management credibility with us—we’d love to hear from you!