In Experiment 1, we hypothesized that participants would: (i) reward individuals they believed to be overweight/obese less than normal-weight individuals and (ii) punish individuals they believed to be overweight/obese more than normal-weight individuals. The research design and analysis plan were pre-registered on the Open Science Framework prior to data collection [https://osf.io/pqzgs]. (Note: our registration documents stated that our analyses would be performed in SPSS, but we chose to run them in R because it is free and open-source, allowing for greater reproducibility.)
One hundred and thirty-four participants took part in an online survey; of these, 121 completed the study (95 women and 26 men; 87.6% between the ages of 18 and 25). Participants were largely recruited from the student and staff population of the University of Liverpool, UK. The inclusion criterion was age 18+ years; individuals with a current or previous diagnosis of a psychiatric disorder were excluded. The study was approved by the local Research Ethics Committee (ref: 5516). Our a priori power calculation estimated that 139 participants (one-tailed, within-subjects t-test) would be needed to detect dz = 0.25 at 90% power. However, because this was a student-led project with time constraints, we were only able to recruit 121 participants. Note that under more conventional statistical power (80%), this sample size would have been sufficient to detect the desired effect (N = 101).
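The reported sample sizes can be reproduced with the standard normal-approximation formula for a one-tailed paired t-test, plus Guenther's small-sample correction. This is an illustrative sketch (the paper does not state which software computed the power analysis), using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(dz, power, alpha=0.05):
    """Approximate N for a one-tailed, within-subjects (paired) t-test.

    Uses the normal approximation ((z_alpha + z_beta) / dz)^2 with
    Guenther's correction term z_alpha^2 / 2 for the t-distribution.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)  # one-tailed critical value
    z_beta = z.inv_cdf(power)       # quantile for the desired power
    n = ((z_alpha + z_beta) / dz) ** 2 + z_alpha ** 2 / 2
    return ceil(n)

print(paired_t_sample_size(0.25, 0.90))  # 139 (the pre-registered target)
print(paired_t_sample_size(0.25, 0.80))  # 101 (the 80%-power figure)
```

Both values match those reported in the text, suggesting the original calculation used this (or an equivalent noncentral-t) approach.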
New Financial Discrimination Task: Participants were shown a landing page informing them that they would monitor the performance of individuals enrolled in a course to improve their cognitive skills (which we called "Psy-Learn"); this served as the cover story. They were specifically informed that the study tested whether administering small financial rewards and punishments improved cognitive learning. The task then instructed them to observe individuals' performance on various cognitive tests and to assign a small financial reward or punishment based on performance on each test. They were also told that they would provide an overall assessment of the individual's suitability for advancement on the course.
Participants then provided consent and basic demographic information. After this, they were presented with a mock screen designed to simulate Psy-Learn searching for and connecting with a suitable learner. Once connected, the participant saw the learner's profile, containing answers to the same demographic questions they themselves had been asked, for 60 seconds. They then observed the learner's performance on six cognitive tests. These included a speeded reaction-time test (reacting to an arrow appearing on the screen in <500 ms), solving a 7-letter anagram, and a 7-word short-term memory test. Sample trials and screenshots are shown in the additional materials. Importantly, the participant was informed after each trial whether the learner was correct or incorrect. If the learner was correct, the participant saw the message 'The student was RIGHT! How much are you going to REWARD them?', and if the learner was incorrect, the participant saw 'The student was WRONG! How much are you going to PUNISH them?'. They then responded on a sliding scale from 0 to 100 pence. The task was pseudo-randomized in that each learner got three trials correct and three trials incorrect; therefore, our main dependent variables were the total reward and total punishment assigned to the learner (0–300 pence). After giving a reward or punishment for the last test, participants were asked 'Overall, would you recommend that the participant proceed to the next stage of Psy-Learn in the future? They could make more money in future sessions. [YES, NO]' and were then invited to provide feedback on the learner's performance via an open text box. They were then asked three memory questions based on the learner's demographic information, the critical question being 'What was the student's weight? [Underweight, Average Weight, Overweight/Obese, Preferred not to say]'. This served as a manipulation check to ensure that the participant had paid attention to the information.
After this, participants were returned to the sham search screen to connect with a second learner. Again, they saw the demographic information of the second learner for 60 s. All demographic information except weight status was held constant between the first and second learner. Participants then observed, and gave reward and punishment on, the same six tests, with the second learner again getting three correct and three incorrect; however, their performance differed from the first learner's (e.g. different answers were given, and different trials were correct/incorrect) to ensure that the manipulation was not obvious. Participants were again asked whether the learner should progress in the course and invited to give feedback on their performance. Finally, their memory of the second learner's demographic information was assessed, critically including memory of the learner's weight. This marked the end of the experiment; participants were debriefed and the cover story was explained. In both studies, participants' reward and punishment behavior across individual trials showed acceptable internal consistency (ωs > 0.81; see additional online materials).
Participants clicked a hyperlink to the study (hosted by Inquisit Web v.5, Millisecond, Seattle). They read the cover story (information about Psy-Learn) and then gave informed consent. They were then asked to provide some categorical demographic information. Next, they saw a learner's profile and completed the new discrimination task, before recalling demographic information about the learner. After this, they observed, rewarded and punished the second learner, before again being asked to recall the demographic information. The order of the learners' weight status (normal weight vs. overweight/obese) was counterbalanced across participants. Within participants, however, the learners' remaining demographics were closely matched (e.g. age was within ±2 years, gender was the same) to reduce the likelihood of discrimination on other demographics. To increase the credibility of the cover story, participants who clicked the link outside the hours of 10am to 10pm received a message saying 'None of our students are online now, please try again later'. The experiment lasted about 15 minutes.
Data reduction and analysis
Our main dependent variables were total reward and total punishment [0–300 pence], calculated separately for learners with 'normal/average' weight and 'overweight/obese' status. We performed paired-samples t-tests separately for reward and punishment. We also repeated these comparisons in participants who passed the manipulation check of recalling the learner's weight status. One hundred and eleven participants (91.7%) correctly remembered the weight status of the normal-weight learner and one hundred and fourteen (94.2%) correctly remembered the weight status of the overweight/obese learner; in total, 106 (87.6%) remembered both. There was one outlier in punishment scores (see online additional materials), but removing this outlier or recoding it to the nearest non-outlier value (300 → 280) did not materially affect the results (ps < 0.05). Therefore, analyses are presented with the outlier included. We performed McNemar's test to investigate whether recommendations to progress differed significantly between the conditions. In Supplementary Material, we examined the magnitude of reward and punishment in normal-weight participants only as an exploratory analysis. Data and analysis code, as well as a sample script for the experimental paradigm, are available on OSF [https://osf.io/p2mtz/].
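The core analysis, a paired-samples t-test with the within-subjects effect size dz (mean difference divided by the SD of the differences), can be sketched in a few lines of standard-library Python. The data below are toy values, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t-test.

    Returns (t, df, dz), where dz = mean(differences) / SD(differences)
    and t = dz * sqrt(n) for n pairs.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    dz = mean(diffs) / stdev(diffs)  # within-subjects effect size
    t = dz * sqrt(n)
    return t, n - 1, dz

# Toy example: total punishment (pence) per participant in the two conditions
normal = [90, 120, 80, 150, 100]
overweight = [110, 130, 95, 160, 125]
t, df, dz = paired_t(normal, overweight)
print(f"t({df}) = {t:.2f}, dz = {dz:.2f}")
```

Note that the effect sizes reported below appear to use the conditions' own SDs (g) rather than the SD of the differences (dz); the two conventions differ numerically.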
A complete overview of participant demographics is presented in Table 1. The sample consisted largely of young, female students with a higher education level and a self-reported 'average' body weight.
There was no significant difference in reward between the normal-weight (mean = 199.84, SD = 72.64) and overweight/obese conditions (mean = 205.34, SD = 73.58; t(120) = 1.25, p = 0.214, g = −0.07 [95% CI: −0.19 to 0.04]). When removing individuals who did not correctly remember the learner's weight status (normal weight: mean = 204.64, SD = 73.19; overweight/obese: mean = 208.85, SD = 74.53), the effect remained non-significant (t(105) = 0.91, p = 0.367, g = −0.06 [95% CI: −0.18 to 0.07]).
There was a significant difference in punishment between the normal-weight (mean = 96.79, SD = 66.31) and overweight/obese conditions (mean = 107.50, SD = 71.06; t(120) = 2.28, p = 0.025, g = −0.15 [95% CI: −0.29 to −0.02]), in which participants punished overweight/obese learners more than normal-weight learners. When removing participants who did not correctly remember the learner's weight status, the effect remained significant (t(105) = 2.21, p = 0.029; see Fig. 1).
Decisions to help students progress
Participants recommended that normal/average-weight learners progress 75/106 (70.8%) of the time, and overweight/obese learners 66/106 (62.3%) of the time. McNemar's test was significant (χ²(1) = 10.05, p = 0.002), demonstrating a significant difference in progression recommendations: participants were more likely to recommend that normal-weight individuals move to the next stage than overweight/obese individuals, despite both getting the same number of tasks right and wrong.
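McNemar's test compares paired binary outcomes using only the discordant pairs (participants who recommended one learner but not the other). The paper reports the test statistic but not the discordant cell counts, so the counts below are purely illustrative:

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square (1 df, without continuity correction).

    b = pairs recommended in condition A but not B;
    c = pairs recommended in condition B but not A.
    Concordant pairs (both yes or both no) do not enter the statistic.
    """
    return (b - c) ** 2 / (b + c)

# Hypothetical discordant counts, for illustration only
chi2 = mcnemar_chi2(16, 3)
print(f"chi2(1) = {chi2:.2f}")
```

Because only discordant pairs carry information, the same 75 vs. 66 marginal totals can yield different chi-square values depending on how recommendations overlapped across the two learners.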
In our new experimental paradigm, participants gave more punishment to individuals they believed to be overweight/obese than to normal-weight individuals, and were less likely to recommend that overweight/obese individuals move on to the next phase. This observation supports previous studies showing that people willingly punish individuals financially even when the punishment has no direct benefit to themselves (known as altruistic punishment). Here, this may be compounded by a greater willingness to punish an 'outgroup', as most of our sample self-identified as average weight. However, a limitation of this study was the homogeneous sample of young, female students. Given the predominantly student sample and the within-subjects design, demand characteristics may have influenced responses. As such, we sought to mitigate this by using a between-subjects design in Experiment 2.