Home

I am an Assistant Professor at the Department of Methodology and Statistics at the School of Social and Behavioral Sciences of Tilburg University, the Netherlands.

My research focuses on meta-science, including topics such as replication, publication bias, statistical errors, and questionable research practices. I am currently interested in the idea of looking at details (e.g., statistical reporting errors) to uncover bigger problems (e.g., the overall robustness of a conclusion). I received an NWO Veni grant to further expand this idea.

I am part of the Meta-Research Center at Tilburg University: http://metaresearch.nl.

Contact
Email: m.b.nuijten@uvt.nl
Work phone: (+31) (0) 13 466 2053

New Preprint: The Effects of p-Hacking and Publication Bias

We know that p-hacking and publication bias are bad. But how bad? And in what ways could they affect our conclusions?

Proud to report that my PhD student Esther Maassen published a new preprint: “The Impact of Publication Bias and Single and Combined p-Hacking Practices on Effect Size and Heterogeneity Estimates in Meta-Analysis”.

It’s a mouthful, but this nicely reflects the level of nuance in her conclusions. Some key findings include:

  • Publication bias is very bad for effect size estimation
  • Not all p-hacking strategies are equally detrimental to effect size estimation. For instance, even though optional stopping may be terrible for your Type I error rate, it does not add much bias to effect size estimation (see the simulation sketch after this list). Selective outcome reporting and optionally dropping specific types of participants, on the other hand, are really, really bad.
  • Heterogeneity was also impacted by p-hacking, but sometimes in surprising ways. Turns out: heterogeneity is a complex concept!
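To make the optional stopping point concrete, here is a minimal simulation sketch (in Python, not the preprint’s code; the zero true effect, sample sizes, and peeking rule are illustrative assumptions): repeatedly peeking at a t-test and stopping at the first significant result inflates the Type I error rate, while the average effect size estimate across all simulated studies stays close to the true value.

```python
# Minimal sketch (not the preprint's code): optional stopping inflates the
# Type I error rate, but barely biases the mean effect size estimate.
# The true effect (zero), sample sizes, and peeking rule are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_max, step, alpha = 5000, 100, 20, 0.05

def one_study(optional_stopping):
    """Run one two-group study with true effect = 0; return (significant, cohens_d)."""
    x = rng.normal(0, 1, n_max)  # group 1 (the null hypothesis is true)
    y = rng.normal(0, 1, n_max)  # group 2
    looks = range(step, n_max + 1, step) if optional_stopping else [n_max]
    for n in looks:  # peek after every `step` observations per group
        p = stats.ttest_ind(x[:n], y[:n]).pvalue
        if p < alpha or n == n_max:  # stop at the first significant peek (or at n_max)
            pooled_sd = np.sqrt((x[:n].var(ddof=1) + y[:n].var(ddof=1)) / 2)
            return p < alpha, (x[:n].mean() - y[:n].mean()) / pooled_sd

for label, peeking in [("fixed N", False), ("optional stopping", True)]:
    results = [one_study(peeking) for _ in range(n_sims)]
    type1 = np.mean([sig for sig, _ in results])
    mean_d = np.mean([d for _, d in results])
    print(f"{label:17s}  Type I error = {type1:.3f}  mean estimated d = {mean_d:+.3f}")
```

With five looks, the Type I error rate comes out around two to three times the nominal 5%, while the mean estimated d stays near zero in both conditions.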

Her work includes a custom Shiny app, where users can see the impact of publication bias and p-hacking in their own scenarios: https://emaassen.shinyapps.io/phacking/.

Take-away: we need systemic change to promote open and robust scientific practices that avoid publication bias and p-hacking.

Blog: Should meta-scientists hold themselves to higher standards?

If your job effectively consists of telling other researchers how to do their job, what happens to your credibility if you drop the ball in your own research? What if you don’t always practice what you preach, or if you make mistakes?

For the monthly Meta-Research Center blog, I wrote about these dilemmas. Should meta-researchers be held to higher standards to be taken seriously? In the end, I concluded that good science isn’t about being perfect; it’s about being transparent, adaptable, and striving to do better.

Read the full blog here: Should meta-scientists hold themselves to higher standards? — Meta-Research Center

Welcoming Cas & Dennis to the Team!

I’m very pleased to announce that Cas Goos and Dennis Peng are joining the Meta-Science team under my supervision.

Cas will work on improving statistical reproducibility in psychology: what are journals doing already? And is it working? This is a project together with Jelte Wicherts.

Dennis will look at the statistical validity of intervention studies in clinical psychology. Which analyses are used? Is there room for opportunistic use of researcher degrees of freedom in these analyses? And how can this be done better? This is a project together with Paul Lodder and Jelte Wicherts.

I’m excited to see the progress on both projects. Welcome to the team!

Looking for a PhD Candidate

The Meta-Research Center is looking for a new PhD candidate!

In this project, you will look at the statistical validity of psychological intervention studies. You will work under the direct supervision of Dr. Paul Lodder, Prof. Jelte Wicherts, and me.

The position is perfect for a student interested in clinical psychology, statistics, methodology, and meta-research.

Details about the position and application can be found here: https://tiu.nu/21539.

New Preprint on Reporting Errors in COVID-19 Research (a Registered Report)

The COVID-19 outbreak has led to an exponential increase of publications and preprints about the virus, its causes, consequences, and possible cures. COVID-19 research has been conducted under high time pressure and has been subject to financial and societal interests. Doing research under such pressure may influence the scrutiny with which researchers perform and write up their studies. Either researchers become more diligent because of the high-stakes nature of the research, or the time pressure leads to cutting corners and lower-quality output.

In this study, we conducted a natural experiment to compare the prevalence of incorrectly reported statistics in a stratified random sample of COVID-19 preprints and a matched sample of non-COVID-19 preprints.
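For context, a result counts as incorrectly reported when the p-value recomputed from the reported test statistic and degrees of freedom does not match the reported p-value, allowing for rounding. Below is a minimal Python sketch of that check for a two-sided t-test; the rounding and one-sided logic in statcheck itself are more elaborate, and the example values are made up.

```python
# Minimal sketch of a statcheck-style consistency check for a two-sided t-test.
# statcheck itself (an R package) handles rounding, one-sided tests, and
# inequalities (e.g., p < .05) more carefully; this only illustrates the core idea.
from scipy import stats

def check_t(t, df, reported_p, decimals=2):
    """Flag a reported p-value that does not match the reported t and df."""
    recomputed = 2 * stats.t.sf(abs(t), df)  # two-sided p from t and df
    # Reported p-values are rounded, so compare at the reported precision.
    consistent = round(recomputed, decimals) == round(reported_p, decimals)
    return round(recomputed, 3), consistent

print(check_t(t=2.20, df=28, reported_p=0.04))  # recomputed ~ .036 -> consistent
print(check_t(t=1.30, df=28, reported_p=0.04))  # recomputed ~ .204 -> inconsistent
```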

Our results show that the overall prevalence of incorrectly reported statistics is 9-10%, but both frequentist and Bayesian hypothesis tests show no difference in the number of statistical inconsistencies between COVID-19 and non-COVID-19 preprints.

Taken together with previous research, our results suggest that the danger of hastily conducting and writing up research lies primarily in the risk of conducting methodologically inferior studies, and perhaps not in the statistical reporting quality.

You can find the full preprint here: https://psyarxiv.com/asbfd/.

New Preprint: Using Statcheck in Peer Review May Reduce Errors

We investigated whether statistical reporting inconsistencies could be avoided if journals implement the tool statcheck in the peer review process.

In a preregistered study covering over 7000 articles, we compared the inconsistency rates of two journals that implemented statcheck in their peer review process (Psychological Science and Journal of Experimental Social Psychology) with those of two matched control journals (Journal of Experimental Psychology: General and Journal of Personality and Social Psychology, respectively), before and after statcheck was implemented.

Preregistered multilevel logistic regression analyses showed that the decrease in both inconsistencies and decision inconsistencies around p = .05 was considerably steeper in statcheck journals than in control journals. This supports the notion that statcheck can be a useful tool for journals to avoid statistical reporting inconsistencies in published articles.
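The core of that design is a difference-in-differences comparison: the interaction between journal type (statcheck vs. control) and period (before vs. after implementation) in a logistic model of whether a reported result is inconsistent. The sketch below shows that interaction term in a plain single-level logistic regression with statsmodels; the preregistered analyses were multilevel (results nested in articles), and the tiny data frame here is entirely hypothetical.

```python
# Illustrative sketch of the difference-in-differences logic behind the study:
# regress "is this reported result inconsistent?" on journal type, period, and
# their interaction. The real analyses were multilevel (results nested within
# articles); the toy data below are made up.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per reported statistic.
df = pd.DataFrame({
    "inconsistent":         [1, 0, 0, 0,  1, 1, 0, 0,  1, 0, 0, 0,  1, 0, 0, 0],
    "statcheck_journal":    [0, 0, 0, 0,  1, 1, 1, 1,  0, 0, 0, 0,  1, 1, 1, 1],
    "after_implementation": [0] * 8 + [1] * 8,
})

model = smf.logit(
    "inconsistent ~ statcheck_journal * after_implementation", data=df
).fit(disp=0)

# A negative interaction coefficient indicates that inconsistencies dropped more
# steeply in the statcheck journals than in the control journals.
print(model.params)
```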

You can find the full preprint here: https://psyarxiv.com/bxau9.

Young eScientist Award

In December 2020, Willem Sleegers and I were awarded the Young eScientist Award from the Netherlands eScience Center for our proposal to improve statcheck’s searching algorithm. Today marks the start of our collaboration with the eScience Center and we are very excited to get started!

In this project, we plan to extend statcheck’s search algorithm with natural language processing techniques, so that it can recognize more statistics than just the ones reported exactly in APA style (a current restriction). We hope that this extension will expand statcheck’s functionality beyond psychology, so that statistical errors in, e.g., biomedical and economics papers can also be detected and corrected.
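To illustrate that restriction: statcheck currently only picks up statistics written exactly in APA style, such as “t(28) = 2.20, p = .036”, which a regular expression can match. Below is a simplified Python sketch of that kind of pattern-based extraction (statcheck itself is an R package with more elaborate patterns); the second sentence shows the sort of phrasing that currently slips through and that an NLP-based search should catch.

```python
# Simplified illustration of pattern-based extraction of APA-style t-tests.
# statcheck itself is an R package with more elaborate regexes; this sketch only
# shows why deviating from APA style makes a result invisible to the current search.
import re

APA_T_TEST = re.compile(
    r"t\s*\(\s*(?P<df>\d+(?:\.\d+)?)\s*\)\s*=\s*(?P<t>-?\d+\.\d+)\s*,\s*"
    r"p\s*[=<>]\s*(?P<p>\.\d+)",
    flags=re.IGNORECASE,
)

sentences = [
    "The effect was significant, t(28) = 2.20, p = .036.",       # APA style: detected
    "A t-test (df = 28) gave t = 2.20 with a p-value of .036.",  # non-APA: missed
]
for sentence in sentences:
    found = APA_T_TEST.search(sentence)
    print("detected" if found else "missed  ", "->", sentence)
```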

More information about the award can be found here.

Interview Recognition and Rewards

Recently, academic organisations in the Netherlands have been discussing how we can improve the system of Recognition and Rewards for scientists. In a short interview for Tilburg University, I explain my hope that rewarding Open Science can benefit both science and scientists.