If you have studied, or at least read about, psychology, you might have heard of the popular idea that our self-control draws on a limited resource: once it is depleted, it becomes harder to resist temptation and regulate oneself. The term for this phenomenon, ego depletion, was coined by Roy Baumeister, building on a theory from our beloved Freud. According to Freud, the ego is the psychological entity that regulates our innate impulses against reality, but it can become fatigued from the effort of managing one’s self.
To test the hypothesis, Baumeister and his colleagues ran a classic experiment in 1998. They set up two dishes on a table, one with chocolate chip cookies and the other with radishes, and each subject was assigned to eat only one of the foods but not the other [Note 1]. Afterwards, the subjects were asked to complete a puzzle that, unknown to them, was designed to be unsolvable, and the experimenters measured how long they stayed on the puzzle and how many attempts they made. Consistent with the ego-depletion hypothesis, subjects who ate the radishes spent less time and made fewer attempts on the puzzle. The researchers concluded that, after resisting the temptation to eat the cookies, these subjects had used up their willpower, leaving less energy for the puzzle.
Since then, the study has been widely cited and applied, and ego depletion has even come to be considered a subfield of psychology. The selling point of the theory is its flexibility, which lends it a wide range of practical applications. For example, the effect was shown when prejudiced people exerted more effort interacting with people of a different race (Richeson & Shelton, 2003), when dieters ate more ice cream after inhibiting their emotional responses (Vohs & Heatherton, 2000), and when consumers made more intuitive, effortless buying decisions after being depleted (Pocheptsova, Amir, Dhar, & Baumeister, 2009). Even a meta-analysis of 83 studies and 198 independent tests, done by Hagger and colleagues in 2010, found the ego-depletion effect to be significant and moderate in size.
But what if I told you that ego depletion may not exist, or at least is not as strong as people think? What… how can that be? Recently, Evan Carter and Michael McCullough ran an experiment based on ego depletion but failed to find evidence of the effect. In response, they re-examined Hagger and colleagues’ meta-analysis and found that its conclusion could have been biased: it included only published studies, which skew toward positive results, and it did not correct for the small-study effect (i.e. the tendency for smaller studies to show larger effects), thus overestimating the effect size. Applying newer methods in their own meta-analysis (2014), Carter and McCullough found no significant effect of ego depletion in the literature, and suggested that the earlier analysis suffered from publication bias.
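The small-study effect is easy to reproduce in simulation. Below is a minimal sketch in Python (the effect size, sample sizes, and significance cutoff are all made-up illustrative values, not taken from any of the studies above): when only significant results see print, the small studies that clear the bar report much larger effects than the large ones.

```python
import random
import statistics

random.seed(1)

def significant_effect(true_d, n):
    """Run one two-group study with n subjects per group; return the
    estimated standardized effect d_hat if it cleared ~p < .05
    (two-tailed), else None."""
    treat = [random.gauss(true_d, 1) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1) for _ in range(n)]
    sd = ((statistics.variance(treat) + statistics.variance(ctrl)) / 2) ** 0.5
    d_hat = (statistics.mean(treat) - statistics.mean(ctrl)) / sd
    t = d_hat * (n / 2) ** 0.5          # two-sample t with pooled sd
    return d_hat if abs(t) > 2.0 else None  # ~critical t for moderate df

TRUE_D = 0.2  # the same modest true effect for every study
small = [d for _ in range(20000) if (d := significant_effect(TRUE_D, 15)) is not None]
large = [d for _ in range(20000) if (d := significant_effect(TRUE_D, 150)) is not None]

print(f"mean 'published' effect, n=15 per group:  {statistics.mean(small):.2f}")
print(f"mean 'published' effect, n=150 per group: {statistics.mean(large):.2f}")
```

With these settings the surviving small studies report an average effect several times the true 0.2, while the large studies stay close to it; this asymmetry is roughly what funnel-plot-based corrections, like the ones Carter and McCullough applied, are designed to detect.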
The subject has already been discussed in depth in a blog post (“Everything is crumbling” by Daniel Engber) and a YouTube video (“Why an entire field of psychology is in trouble” by SciShow) [Note 2]. It may be hindsight, but I knew something like this would happen sooner or later, given how studies are published and reported. However, I do not intend to argue that ego depletion is a fraud; after all, it has been widely tested and replicated, and Carter and McCullough’s method is still considered premature (in the article, they explicitly stated that they did not wish to suggest ego depletion is not real, and that their method was still relatively new). The problem is rather that the phenomenon may not be as robust as people expected, because the reported effect has been inflated. Nevertheless, the state of current publication and research practice demands immediate attention from the scientific community.
The Problem with the Current Publication System
Since I was a child, I have wanted to be a scientist so I could find answers to the questions I have always been asking. Initially I thought the process was simple and straightforward, and ideally it should be: all I have to do is ask a question, form a hypothesis, collect and analyze the data, then report it.
But it is not that easy in real life, where many other factors are involved. Young, Ioannidis, and Al-Ubaydli (2008) wrote an excellent article about the economics of science. In essence, they described scientific publication as the process of transferring a product (research results and knowledge) from producers (scientists) to consumers (the general public), much like how you get groceries from a factory. But before the product can be consumed, it has to pass through a medium or gatekeeper: a supermarket between you and the factory, or, in the case of scientific publication, journals and reviewers.
However, the current system is dominated by a few high-impact journals, and they determine which science people get to see. I hate to say this, but the decision to publish is sometimes independent of the quality of the manuscript; what gets published can depend on outside factors, such as politics and whichever views or theories are popular in the field.
But what causes the bias? It is usually not personal favoritism but the result of rational choice. A journal receives thousands of manuscripts yet has limited publication slots, so reviewers are highly selective and pick studies with significant, novel findings to publish. They are therefore reluctant to accept null results, replications, and, ironically, replications with null findings. And because they do not know the true relationship (well, no one does), publishing mainly positive results inflates the overall reported relationship.
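To make that inflation concrete, here is a minimal sketch in Python (the sample size, study count, and significance cutoff are made-up illustrative values): even when the true effect is exactly zero, the handful of studies that happen to clear the significance bar in the positive direction report a substantial average effect.

```python
import random
import statistics

random.seed(42)

def run_study(true_d, n):
    """Simulate one two-group study of n subjects per group; return the
    estimated standardized effect d_hat and whether it cleared ~p < .05."""
    treat = [random.gauss(true_d, 1) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1) for _ in range(n)]
    sd = ((statistics.variance(treat) + statistics.variance(ctrl)) / 2) ** 0.5
    d_hat = (statistics.mean(treat) - statistics.mean(ctrl)) / sd
    t = d_hat * (n / 2) ** 0.5       # two-sample t with equal n, pooled sd
    return d_hat, abs(t) > 2.02      # ~critical t for df = 2n - 2 = 38

results = [run_study(0.0, 20) for _ in range(5000)]   # true effect is ZERO
all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig and d > 0]  # positive hits only

print(f"mean effect across all {len(results)} studies: {statistics.mean(all_effects):+.2f}")
print(f"mean effect across the {len(published)} 'published' ones: {statistics.mean(published):+.2f}")
```

Only a small fraction of these null studies land in the positive-significant bin, but their average effect is far from zero; a naive meta-analysis reading only those would “confirm” a non-existent phenomenon.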
What Will the Researchers Do?
But that is the problem on the publishing side, and you may say that if there is really no relationship or effect, the researchers will not find anything in the first place, right? WRONG! They have many incentives (e.g. funding or career building) to publish in prestigious journals, and to do so they will use any strategy that maximizes the probability of successful publication, including obtaining significant, positive results that support the dominant theory. Although this is a problem across science in general, it is more severe in “softer” sciences such as psychology because of their relatively subjective nature, as shown by the higher rate of positive findings in the literature (Fanelli, 2010).
If you have just learned statistics, you might think the process of analysis is robust, and that it only shows what is really there. But in reality, it is unacceptably easy to get a statistically significant result; as Ronald Coase said, “If you torture the data long enough, it will confess” [Note 3]. Skilled researchers often engage in exploratory practices when interpreting a data set. Notably, Simmons, Nelson, and Simonsohn (2011) outlined the researcher degrees of freedom, that is, the things researchers can do to “play around” with the data, which include but are not limited to choosing among outcome variables, increasing the sample size, adding control variables, or reporting only some of the conditions or treatments (why am I telling you this?).
These practices, called p-hacking, are the ways researchers can examine and explore a data set until something is found. The researchers do not necessarily have a malicious intention to publish faulty results; rather, these behaviours are the product of ambiguous data and the desire to obtain statistically significant findings. Nevertheless, these degrees of freedom raise the false-positive rate well above the conventional 5%: the simulations by Simmons and others (2011) showed that combining the practices can push the false-positive rate up to 61%.
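Here is a minimal sketch of just one of those degrees of freedom (my own toy simulation in Python, not the one from Simmons and colleagues; it assumes two independent outcome measures, and there is no true effect anywhere in the data): simply measuring two outcomes and reporting whichever one “works” nearly doubles the false-positive rate.

```python
import random

random.seed(0)

def significant(n):
    """One two-group t-test on pure noise; True if |t| exceeds ~1.98,
    the two-tailed 5% critical value for df = 2n - 2 = 98."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    t = (ma - mb) / (((va + vb) / n) ** 0.5)
    return abs(t) > 1.98

N, SIMS = 50, 4000
# Honest researcher: one pre-registered outcome variable.
honest = sum(significant(N) for _ in range(SIMS)) / SIMS
# P-hacker: two outcome variables, report whichever one is significant.
hacked = sum(significant(N) or significant(N) for _ in range(SIMS)) / SIMS

print(f"false-positive rate, one outcome: {honest:.3f}")
print(f"false-positive rate, best of two: {hacked:.3f}")
```

The single-outcome rate stays near the nominal 5%, while the best-of-two rate roughly doubles; each extra degree of freedom (optional stopping, extra covariates, dropped conditions) compounds in the same way, which is how Simmons and colleagues reached rates as high as 61%.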
(To be Continued…with solutions)
- Interestingly, participants in the radish condition did show apparent interest in the cookies, and some of them even picked them up and sniffed at them, though none ever bit the wrong food.
- Freud was NOT the father of modern psychology, but of psychoanalysis. In fact, Wilhelm Wundt was the father of modern experimental psychology.
- Quoted from “A Comment on Daniel Klein’s ‘A Plea to Economists Who Favor Liberty’” by Gordon Tullock, who noted that he had heard Coase say this several times, but that Coase had never published it, as far as he knew.