Psychology is currently undergoing a reformation. Starting around 2011 - coincidentally, just as I started grad school - a host of nascent concerns about standard practices in the field coalesced into a reform movement. This movement, often referred to as the "Open Science" movement (though openness - i.e., freely sharing data, code, and papers - is not its only goal), has continued to gather steam in the years since. The Twitter-mediated data-dive I took into the movement back in early 2016 now looks almost quaint, in the aftermath of two years of rapid progress and controversy. And there has definitely been both: progress can be seen in the flowering of online data and code repositories like the Open Science Framework, the birth of numerous preprint servers imitating the arXiv, the increasing prevalence of preregistration and of journals accepting registered reports, and improvements in statistical standards. Many researchers across the field agree that at least some of these goals are laudable. However, progress has been accompanied by much controversy, particularly over the interpretation of direct replications, and over the language and tone used in the debate.
I do not intend to recapitulate the history of the open science movement here. However, I set the scene above because it was against this backdrop that I initiated development of MySocialBrain.org in 2014. A major criticism of standard psychological methods - even before the current reform movement, but particularly since - is the nature of the samples we tend to use. Two features of these samples detract from the conclusions we can draw from much research. First, the samples are simply too small. Numerous meta-scientific investigations - both published and informal - have concluded that many past psychological studies were underpowered. This means that the samples were often too small to detect effects, even when those effects were really out there in the population. Second, the samples do not represent the population. Most psychological studies use convenience samples, meaning they are composed largely of undergraduates at research universities. However, these students differ from the general population of the country along many dimensions, and are even less representative of humanity globally. Studies conducted solely on such samples are thus limited in terms of their generalizability.
I conceived of MySocialBrain.org as a way to help free my research from these limitations. The internet provides modern psychological researchers with unprecedented opportunities. The accessibility of websites to a large proportion of the global population means that a researcher can hope for much more diverse, representative samples than could be recruited for the average lab study. Moreover, by using self-discovery instead of cash to motivate volunteer participants, the cost of research goes down, and sample sizes go up. People love to learn new things about themselves, even when the things are as silly as a Buzzfeed quiz. As researchers, we can harness this natural human drive for the benefit of psychological science. Thus, by turning to the web and offering participants detailed, personalized visual feedback, we might dramatically improve sample characteristics.
In March of 2015, while still a graduate student at Harvard, I launched the first public version of MySocialBrain.org. The site had been under development for most of the preceding academic year, but was still far from the vision I had for it. Outside events conspired to rush us to an early release: Facebook, whose API we were using for our inaugural study, would be changing the data we (and other app developers) could access, starting in May 2015. One of the changes would have made it impossible to conduct the study we had planned, and thus we hurried to get the site ready in time. Despite a few hiccups, the launch went about as well as could have been expected, and we accrued a substantial sample in the inaugural study before Facebook's technical changes made it impossible to continue. This was largely thanks to the efforts of Enrique Dominguez-Meneses, then a talented undergraduate in the lab. Enrique was responsible for virtually all of the programming in this version of the site, as at the time I was only dipping my toes into the ocean of web development.
However, over time I chipped away at the project. With the support of my graduate advisor, Jason Mitchell, and subsequently my postdoc advisor, Diana Tamir, I eventually brought the site to its present state. Ultimately, the changes proved quite extensive. A few highlights:
In the interim, although the site has been available, I have not actively promoted it. Now, however, I have finally built up a critical mass of studies which I hope will keep participants entertained, engaged, and enthusiastic about the site. The next step will be to use a variety of tools - including social media - to promote and popularize MySocialBrain.org. With so many other attractions on the web, building up a high volume of web traffic will likely be a challenge in itself, albeit a very different one from coding the site. Fortunately, I have some experience with this type of public outreach in the form of this blog. Moreover, I know the task is possible, thanks in large part to the previous generation of research websites like Project Implicit and Test My Brain.
The existence of these older sites raises an important question: what makes MySocialBrain.org different, other than the obvious facts that I maintain it and it features my studies? There are indeed many similarities between the sites. For example, they all help to recruit larger and more diverse samples than those used in typical psychology studies. What makes MySocialBrain.org different as a platform is that through it, we are really trying to take full advantage of the possibilities of the web. Most existing psychology websites are essentially platforms for running typical lab studies online: the exact same task could be done locally on a single (disconnected) computer in the office next door to the researcher's. Thus, these sites benefit from the larger, more diverse samples available on the web, but not from the other possibilities it offers.
On MySocialBrain.org, we hope to exploit the unique potential of the web in several ways. One of these has already been implemented, in the form of our user account system. By making an account, participants consent to us linking their results across multiple studies. Thus, if one day they complete the Interpersonal Reactivity Index (a measure of social cognition), and the next day they log back on and complete a mental state judgement task, we can correlate their results. This feature gives MySocialBrain.org the potential to be a powerful engine for individual differences research. "Individual differences" is the name psychologists give to research that examines the associations between people's traits (broadly defined). This type of research tries to explain how one person is psychologically different from, or similar to, another, in terms of personality, cognitive abilities, or attitudes and preferences. Such research requires particularly large samples, making it a natural fit for an online platform. Through MySocialBrain.org we thus hope to answer many questions about individual differences in social ability and preference.
Another way in which MySocialBrain.org takes advantage of the web is through interactions with other websites. Many modern websites and apps are purposely built to communicate with each other using Application Programming Interfaces (APIs). APIs allow sites and services on the internet to talk to each other programmatically, without humans having to manually control the process. As I write this, there are nearly twenty thousand APIs listed on ProgrammableWeb, providing access to data on everything from art museum listings, to social media accounts, to beverage sales. Integrating these APIs into experiments could allow for the creation of highly naturalistic, dynamic studies. The inaugural study we conducted on MySocialBrain.org was a simple example of how this could work: with participants' permission, we scraped their Facebook data, and built their "ego-net" (i.e., the social network between all of their immediate friends). We then presented participants with pairs of their friends (in the form of both profile pictures and names taken from Facebook - for privacy we did not store these on our server, but only used them locally). For each pairing, participants judged whether two of their friends were friends with each other. We then compared these judgements with the ground truth from Facebook to calculate objective accuracy (which, as it happens, was quite good).
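The accuracy computation from that inaugural study can be illustrated with a toy example. The friend names, ego-net edges, and judgements below are invented; the real study used pairs drawn from each participant's actual Facebook ego-net:

```python
# Ground-truth ego-net: which pairs of the participant's friends are friends
# with each other. Unordered pairs are stored as frozensets.
true_edges = {frozenset(p) for p in [("ana", "ben"), ("ben", "carl"), ("ana", "dee")]}

# The participant's yes/no judgement for each presented pair of friends.
judgements = {
    frozenset(("ana", "ben")): True,
    frozenset(("ben", "carl")): False,
    frozenset(("ana", "dee")): True,
    frozenset(("carl", "dee")): False,
}

# Objective accuracy: fraction of pairs where the judgement matches reality.
correct = sum((pair in true_edges) == said_yes for pair, said_yes in judgements.items())
accuracy = correct / len(judgements)  # here, 3 of 4 pairs correct -> 0.75
```

Because the ground truth comes straight from the API rather than from self-report, accuracy here is objective in a way that is hard to achieve in a standard lab study.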
Although changes to the Facebook API mean that this particular study can no longer continue, the broader possibilities for integrating APIs with research are fantastic. For example, participants' IP addresses could be used to determine their rough geographic location, and the experiment could then make an API call to a weather service to find out the local conditions. Elements of narratives in the subsequent experiment could then be tweaked to match participants' situation (e.g., a dark and stormy night), increasing their psychological impact. Alternatively, imagine pulling real opinion pieces from newspaper APIs, and real comments from social media sites like Reddit, to serve to participants in a study on political attitudes. Such uses only scratch the surface of what could be accomplished with APIs. Naturally, new challenges will accompany these opportunities, particularly related to participant privacy and data security, but we are confident that these problems can be solved.
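To sketch how such a weather-matched tweak might look: the function below picks a narrative opening to suit the participant's current conditions. The fetch step is deliberately stubbed out - no real geolocation or weather service, fields, or endpoints are assumed here:

```python
def narrative_opening(condition: str) -> str:
    """Pick an opening line matching the participant's local weather."""
    openings = {
        "rain": "It was a dark and stormy night...",
        "clear": "The evening sky was perfectly clear...",
        "snow": "Snow muffled every sound on the street...",
    }
    return openings.get(condition, "The night was unremarkable...")


def fetch_local_condition(ip_address: str) -> str:
    """Placeholder for two API calls: IP -> location, then location -> weather.
    A real implementation might use urllib.request against a geolocation
    service and a weather service; here we simply return a stub value."""
    return "rain"


# The experiment would call this when assembling the participant's stimuli.
opening = narrative_opening(fetch_local_condition("203.0.113.7"))
```

The point is the separation of concerns: the API calls produce a small piece of context (a condition string), and the experiment logic consumes it without caring which service it came from.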
Next, we plan to borrow from standard practices in industry to improve the efficiency of our experimental designs. In particular, we plan to use adaptive design optimization (ADO) to get the most out of our questions and tasks. For those unfamiliar, you can think of ADO a bit like playing a game of 20 questions. Every time you ask a question, you can use the answer to craft the best possible next question. For example, your first question could be "Is it bigger than a bread box?" If the answer is "yes" then it becomes pointless to ask questions which would only help to distinguish between different small objects. Right now, the way most psychology experiments are designed is analogous to coming up with a list of 19 questions at the beginning of the game, and then asking them all in sequence without taking advantage of anything you learn until the end. You can see how inefficient this is! (For a more formal introduction to ADO, see this excellent tutorial).
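The 20-questions analogy can be made concrete with a toy adaptive questioner that always asks whichever yes/no question splits the remaining candidates most evenly. The objects and attributes below are invented for illustration, and real ADO replaces this simple heuristic with formal information-theoretic design optimization:

```python
# Toy candidate set: each object is described by yes/no attributes.
# Assumes every pair of objects differs on at least one attribute.
objects = {
    "bread box": {"big": True, "alive": False},
    "mouse": {"big": False, "alive": True},
    "house": {"big": True, "alive": False},
    "ant": {"big": False, "alive": False},
}


def best_question(candidates):
    """Pick the attribute whose yes/no split is most even - the most
    informative question to ask next, given what we already know."""
    attrs = next(iter(candidates.values())).keys()

    def imbalance(attr):
        yes = sum(1 for attrs_ in candidates.values() if attrs_[attr])
        return abs(2 * yes - len(candidates))

    return min(attrs, key=imbalance)


def play(candidates, answer):
    """Adaptively narrow the candidate set until one object remains."""
    while len(candidates) > 1:
        q = best_question(candidates)
        candidates = {name: a for name, a in candidates.items() if a[q] == answer(q)}
    return next(iter(candidates))


# Simulate a game where the hidden object is the mouse.
guessed = play(dict(objects), lambda q: objects["mouse"][q])
```

Each answer prunes the candidate set before the next question is chosen - exactly the feedback loop that the fixed-list, 19-questions-up-front design throws away.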
ADO can be used in any experiment, whether online, in the lab, or in an MRI scanner. However, certain aspects of online research make ADO particularly useful. First, ADO can also be performed across participants, using previous respondents' choices to guide which questions an experiment asks of new participants. With the high data volume possible online, ADO can thus start to pay off quickly and dramatically - which is probably why it is a staple of data-driven web development in industry. Second, ADO can be used for extra-experimental purposes. For instance, the order in which studies are presented on MySocialBrain.org's homepage could be optimized by observing which studies tend to elicit the most clicks, or which configurations lead to people participating in multiple studies. Thus, ADO could augment not just the value of the data collected, but also its quantity.
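Homepage ordering of this kind can be framed as a multi-armed bandit problem. Below is a sketch using Thompson sampling over per-study click-through rates; the study names and click probabilities are invented, and a production version would also need to handle position effects and multi-study sessions:

```python
import random

random.seed(0)  # for reproducibility of this toy simulation

studies = ["ego-net", "empathy-quiz", "memory-game"]
true_ctr = {"ego-net": 0.10, "empathy-quiz": 0.25, "memory-game": 0.15}  # unknown to the algorithm

# Beta(1, 1) priors over each study's click-through rate.
clicks = {s: 1 for s in studies}
skips = {s: 1 for s in studies}

for _ in range(5000):  # simulated visitors
    # Sample a plausible CTR for each study from its posterior,
    # and feature the study with the highest draw.
    draws = {s: random.betavariate(clicks[s], skips[s]) for s in studies}
    featured = max(draws, key=draws.get)
    # Simulate whether this visitor clicks the featured study.
    if random.random() < true_ctr[featured]:
        clicks[featured] += 1
    else:
        skips[featured] += 1

# How often each study was featured: the bandit should concentrate
# impressions on the study people actually click most.
impressions = {s: clicks[s] + skips[s] - 2 for s in studies}
```

Thompson sampling balances exploration and exploitation automatically: early on it tries every ordering, but as evidence accumulates it funnels visitors toward whatever configuration actually draws participation.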
Another major feature of the web we hope to draw on through MySocialBrain.org is real human interaction. A great many social psychological experiments involve a single person sitting alone in a room, pressing buttons to interact with a computer program - hardly the most social of situations, whatever the content of the study might be. The reason is that multi-person studies are hard to organize: dyadic studies are not too rare, but managing even small real-life groups often requires heroic logistical efforts. The internet makes it easier than ever to bring together large groups of people for scientific research. David Rand's research on the evolutionary game theory behind cooperation is a great example of this. However, much of this work has relied on large-scale recruitment from Amazon Mechanical Turk, which is costly, and which can yield participants who are too experienced to take part in some types of studies. If a volunteer-based online platform can gain sufficient popularity, it might circumvent these limitations of MTurk.
Large scale online interaction experiments offer unparalleled opportunities to study human social behavior. However, they also pose major challenges to ethical researchers. If there's one thing even the most casual user of the internet should know, it's that people can be horrible to each other, particularly under the veil of anonymity. Any study that involves genuine social interaction must thus be especially careful to manage participants' options such that they cannot inflict harm upon each other. One of the easiest ways to achieve this is to limit the range of possible actions participants can take. For example, allowing participants to communicate freely through a chat feature is almost sure to be seriously abused sooner or later, but if participants have a limited range of discrete behavioral choices (e.g., cooperate vs. be selfish), then the possibilities for abuse shrink dramatically. In the end, we believe that such issues can be addressed, and that science will benefit as a result.
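One way to implement such a constrained action space is to validate every incoming move against a fixed set of options on the server, so free text never reaches other participants. This is only a sketch; the action names and handler are invented for illustration:

```python
from enum import Enum


class Action(Enum):
    """The complete set of moves a participant may send to others."""
    COOPERATE = "cooperate"
    DEFECT = "defect"


def handle_move(raw: str) -> Action:
    """Server-side validation: anything outside the allowed set is rejected
    before it can be relayed to other participants."""
    try:
        return Action(raw)
    except ValueError:
        raise ValueError(f"disallowed action: {raw!r}")
```

Because the only thing ever relayed between participants is a member of the enum, there is simply no channel through which abusive content could flow.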
Finally, we want to make our studies intrinsically fun! The average survey or behavioral experiment is a bit of a bore from the participant's perspective: repetitive, unrewarding, and unedifying. Often it has to be this way, but not always. For example, a recent phenomenon in dementia research - Sea Hero Quest - is a simple but fun game involving navigating a small ship. Less than a year after its release, the mobile app had been downloaded over 2.7 million times, furnishing its creators with an unprecedented data set. With a little creativity, many interesting psychology studies could be "gamified" in a similar manner. This process would benefit both researchers and participants: researchers would get more data, and it would likely be of higher quality due to participant enthusiasm and immersion. Participants would get to have an intrinsically fun experience and contribute to science at the same time. Gamifying experiments will admittedly often require more work than otherwise, but we think it is worth the trouble. Moreover, it synergizes naturally with the other advantages outlined above, such as ADO (game AI could titrate to player ability and preference) and interpersonal interaction (multiplayer games).
Thank you for reading this post summarizing the history of MySocialBrain.org's development, and outlining its future. As you have read, we have come a long way to the current version of the site, but we have even further to go to fully realize our vision. We hope you join us, enjoy and learn from our experiments, and watch us grow in size and scope!
© 2018 Mark Allen Thornton. All rights reserved.