How Do Current Research Practices Distort What We Know about Psychology? Part 2: The Solutions

(…Continued from Part 1)

So what can we do about it? Since these systems and practices can significantly distort the science of psychology (and many other fields), many solutions have been proposed to overcome the issues. I will describe some of them here, along with my personal opinions, moving from the individual level (the researchers) to the bigger picture (the publication system).

Researchers’ Statistical Knowledge

The researchers themselves should rethink and revise what they know about statistics. Most psychologists have their own research area, but they are not always specifically trained in statistics, especially if quantitative analysis is not part of their interests. Don't get me wrong: this does not necessarily mean our lecturers are bad at statistics, and some of them have excellent knowledge that I have learned a lot from. Still, for many of them, statistics is just a tool to analyze data, and their statistical knowledge is merely good enough to make sense of it.

But now they should ask the question: is that knowledge still good enough? For example, null hypothesis significance testing (NHST), i.e. the use of the p-value, has long been the standard way to test whether a result is due to random chance, and almost all psychologists have been trained to use it as the sole indicator. However, many psychology students and researchers do not know the limitations of p-values and are susceptible to common misconceptions.
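To make this concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are installed; the numbers are invented for illustration) of what a p-value actually measures: the probability of observing data at least this extreme if the null hypothesis were true, and nothing more.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups drawn from the SAME population, so the null hypothesis
# is true by construction: any "difference" is pure random chance.
group_a = rng.normal(loc=100, scale=15, size=30)
group_b = rng.normal(loc=100, scale=15, size=30)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Even with no real effect, p < .05 turns up about 5% of the time;
# that is exactly the false-positive rate NHST is designed around.
```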

There are many misconceptions about NHST and the p-value, and I am not going to list them all here for the obvious reason (see Goodman's article if you are interested, as well as the classics by Cohen; I strongly recommend them even though the material can be technical). I would still like to point out one of the most common misinterpretations: a p-value only indicates statistical significance (i.e. that the result is unlikely to be due to random chance alone), not practical or clinical significance. I do not like the sole use of the p-value for this reason, but I do not wish to ban it either (although that has been suggested). Instead, I believe the problem lies in interpretation and understanding: everyone should acknowledge the limitations of NHST and supplement it with additional measures (such as effect sizes).
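To show what that distinction looks like in practice, here is a hedged sketch (again Python with NumPy and SciPy; the sample size and means are made up): with a large enough sample, even a negligible difference yields a vanishingly small p-value, while the effect size (Cohen's d) reveals how trivial the effect actually is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 100_000  # an enormous sample per group
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.5, scale=15.0, size=n)  # true difference: half a point

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: mean difference divided by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p_value:.2e}")   # vanishingly small -> "statistically significant"
print(f"d = {cohens_d:.3f}")  # roughly 0.03 -> far below even a "small" effect (0.2)
```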

Guidelines for Authors and Reviewers

There should be guidelines for scientific publication, addressed to both authors and reviewers, that emphasize research transparency. Such guidelines were already proposed by Simmons and others (2011) after they described the researcher degrees of freedom.

As mentioned, the researchers are assumed to be honest and to have no malicious intent, but in order to let reviewers and readers make informed decisions about the credibility of a study, authors should describe their methodology and findings in a transparent manner, such as by listing all variables and conditions in the study (even the failed or non-significant ones), as the sketch below illustrates.
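As a purely hypothetical illustration (the field names and values below are my own invention, not a standard format or Simmons and others' actual checklist), a transparent disclosure could be as simple as recording something like this alongside the data:

```python
# Hypothetical disclosure record: every condition and variable is listed,
# including the failed and non-significant ones, so readers can judge
# how much flexibility went into the reported result.
study_disclosure = {
    "sample_size_rule": "decided in advance: stop at 120 participants",
    "conditions_run": [
        "control",
        "treatment_a",
        "treatment_b (failed manipulation, still reported)",
    ],
    "variables_measured": [
        "reaction_time (reported, significant)",
        "accuracy (reported, non-significant)",
        "mood_rating (collected but not analyzed, disclosed anyway)",
    ],
    "exclusions": "2 participants with incomplete data; results also "
                  "reported with them included",
}

for field, value in study_disclosure.items():
    print(f"{field}: {value}")
```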

Reviewers, in turn, should be more tolerant of imperfections in results, prioritize the transparency of the study, and focus on the quality of the methodology and interpretation rather than on the tidiness or significance of the findings.

Both authors and reviewers should make sure the guidelines are followed; any deviation has to be justified. If the justification is not compelling, an exact replication of the study should be conducted.

Digital Publication

We should also reconsider how scientific papers are published. Because publication slots are limited, reviewers can be biased, and are more likely to select manuscripts that support current theories. Moreover, the review process can be slow, and I mean really slow: it can take months, and sometimes even years, before a manuscript is published. On top of that, the growth of the academic field increases the number of manuscripts to be reviewed, prolonging the process further.

The solution, which has already been employed in places, is the digital publication suggested by Young and others (2008) to overcome the problem of limited slots. Being in the digital era, we can share information more easily, and ideally all journals should have some form of electronic publication platform for received and reviewed manuscripts that are not prioritized for printed publication.

Without the limit of publication slots, everyone can access more studies and findings that still potentially contribute to our knowledge. Such a platform could also adopt the approach of social media, where authors post their manuscripts to the public domain to be reviewed by other researchers in the related field, shortening the review process.

To encourage researchers to review more manuscripts, credit can be given to good reviewers. However, it should be noted that the reviews will not necessarily be less biased; judgments can sometimes be even more biased and extreme when made by groups of people. Nonetheless, a group should provide a wider perspective than a single reviewer.

Final Words

So what now? If you are a student, what can you do as a consumer of science? Honestly, we cannot significantly change the practices and the system by ourselves. Despite the concerns being raised for a long time, these practices are rooted in how people do and publish research, and they cannot be changed in one day. However, the information here is not useless to you (I have spent lots of time writing this and you have read this far; it had better be worth something). Now that you know the system is not perfect, and that positive results are not as difficult to obtain as you might think, you should be more critical of published findings (I know I have mentioned this a lot in other articles, but it is crucial).

Science belongs equally to everyone, and a claim is not more correct simply because it comes from authority. Just because a study is published by famous scientists in prestigious journals does not mean it is good and should be followed blindly. Instead, you should not be afraid to challenge studies and theories. It is alright to apply your own ideas and interpretations (which is what we are all supposed to learn in university), and to contribute to what we know about science and psychology from a different perspective.

 

References and Readings

Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252-1265. doi: 10.1037/0022-3514.74.5.1252

Carter, E. C., & McCullough, M. E. (2014). Publication bias and the limited strength model of self-control: Has the evidence for ego depletion been overestimated? Frontiers in Psychology, 5(823), 1-11. doi: 10.3389/fpsyg.2014.00823

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304-1312. doi: 10.1037/0003-066X.45.12.1304

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997-1003. doi: 10.1037/0003-066X.49.12.997

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. doi: 10.1371/journal.pone.0010068

Goodman, S. (2008). A dirty dozen: Twelve P-value misconceptions. Seminars in Hematology, 45(3), 135-140. doi: 10.1053/j.seminhematol.2008.04.003

Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. D. (2010). Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin, 136(4), 495-525. doi: 10.1037/a0019486

Pocheptsova, A., Amir, O., Dhar, R., & Baumeister, R. F. (2009). Deciding without resources: Resource depletion and choice in context. Journal of Marketing Research, 46(3), 344-355. doi: 10.1509/jmkr.46.3.344

Richeson, J. A., & Shelton, J. N. (2003). When prejudice does not pay: Effects of interracial contact on executive function. Psychological Science, 14(3), 287-290. doi: 10.1111/1467-9280.03437

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. doi: 10.1177/0956797611417632

Vohs, K. D., & Heatherton, T. F. (2000). Self-regulatory failure: A resource-depletion approach. Psychological Science, 11(3), 249-254. doi: 10.1111/1467-9280.00250

Young, N. S., Ioannidis, J. P. A., & Al-Ubaydli, O. (2008). Why current publication practices may distort science. PLoS Medicine, 5(10), e201. doi: 10.1371/journal.pmed.0050201

Jordan Oh (Veng Thang). I was a student at HELP University, and I am currently a third-year psychology student at ANU. I studied and have experience in education (teaching Chinese as a second language), and I was a member of the Peer Mentors and a PAL (Peer Assisted Learning) tutor in quantitative research and cognitive psychology when I was at HELP. My interests are in statistics and soft sciences like psychology, which is why I love what I am doing.
