Nefarious notifications: Social media algorithms and how they drive us

The average person has 35 apps installed on their phone and spends over two hours per day on social networks. When social media first took off in the early 2000s, its purpose was to let users connect with friends and family, share positive moments, and discover interesting information about the world. Today, social media platforms like Facebook and Instagram provide endless reels of curated content, on-demand information, and emotional connection. Over the past two decades, these platforms have undergone a dangerous paradigm shift: They are in a race for human attention. 

It is now an oversimplification to say that TikTok, YouTube, and Twitter make money merely through advertisements. More accurately, these platforms profit from user engagement, striving to capture the greatest possible share of each user's screen time. How often people search for something, how long they linger on a photo, what they save for later: these are the metrics, the big data, that technology companies and advertisers are truly after. The basic premise of big data is that sheer volume of behavioral data yields statistically precise models. Social media companies use these analytics to make ever more accurate predictions of user behavior, such as the next YouTube video that will keep a user glued to the screen or the next pair of shoes they will eagerly add to their cart. 
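As a rough, purely illustrative sketch of how such predictions might be made, the hypothetical Python example below fits a simple logistic-regression model to invented behavioral signals (minutes watched, searches made, posts saved) and estimates how likely a user is to click the next recommended video. The feature names, data, and model choice are assumptions for illustration, not a description of any platform's actual system.

# Hypothetical sketch: predicting engagement from behavioral signals.
# The features and data below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [minutes watched yesterday, searches made, posts saved]
behavior = [
    [10, 2, 0],
    [95, 14, 6],
    [5, 1, 0],
    [120, 20, 9],
    [40, 5, 1],
    [80, 11, 4],
]
# 1 = the user clicked the next recommended video, 0 = they did not
clicked_next_video = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(behavior, clicked_next_video)

# Estimate how likely a new user is to keep watching
new_user = [[60, 8, 2]]
print(model.predict_proba(new_user)[0][1])  # probability of a click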

“Over the past two decades, these platforms have undergone a dangerous paradigm shift: They are in a race for human attention.”

Even the most self-aware technology users can fall prey to this type of artificial intelligence, which identifies patterns in personal usage data to understand users better than they understand themselves. By knowing user preferences so well, machine learning algorithms can present advertisements that are highly likely to elicit engagement, whether that means liking a post, making a purchase, or sharing content with friends. 
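As a toy illustration of that last step, the hypothetical sketch below scores a handful of candidate ads by an invented predicted probability of engagement and simply shows the highest-scoring one. Real ad-selection systems weigh many more signals, such as advertiser bids and policy rules, so this is only a conceptual sketch.

# Hypothetical sketch: choose the ad most likely to elicit engagement.
# Predicted probabilities would come from a model like the one above;
# here they are simply made up for illustration.
candidate_ads = {
    "running_shoes": 0.31,    # predicted probability of a like/click/purchase
    "coffee_maker": 0.12,
    "concert_tickets": 0.27,
}

# Show whichever ad the model expects the user to engage with most
best_ad = max(candidate_ads, key=candidate_ads.get)
print(best_ad)  # -> "running_shoes"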

Not only do technology companies leverage personal data, but their platforms are also designed to keep users engaged, churning out more and more information about them. To boost screen time, social media exploits human psychology in ways both subtle and conspicuous, from subliminal updates to high-energy clickbait. For instance, the “refresh” function available on virtually every social media platform offers users the potential for instant gratification, such as a photo of someone attractive or a highly anticipated message. Psychologists refer to this type of reward delivery as positive intermittent reinforcement, and it engages the same neural circuitry as casino slot machines. As with gambling, users are not rewarded with every refresh or notification check; a predictable reward is not exciting. Rather, it is the unexpected, sporadic nature of these rewards that triggers a much stronger response in the brain's reward centers. 
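The difference between predictable and intermittent rewards can be made concrete with a toy simulation: in the sketch below, each refresh surfaces something rewarding only with some probability, mimicking a variable-ratio schedule. The 30% reward probability is an arbitrary illustrative assumption.

# Toy simulation of intermittent (variable-ratio) reinforcement:
# a "refresh" only sometimes delivers something rewarding.
import random

random.seed(42)
REWARD_PROBABILITY = 0.3  # arbitrary illustrative value

def refresh_feed() -> bool:
    """Return True if this refresh happens to surface a rewarding post."""
    return random.random() < REWARD_PROBABILITY

for attempt in range(1, 11):
    if refresh_feed():
        print(f"refresh {attempt}: new exciting post!")
    else:
        print(f"refresh {attempt}: nothing new")
# The rewards arrive unpredictably, which is exactly what keeps
# users pulling to refresh "just one more time".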

“To boost screen time, social media exploits human psychology in ways both subtle and conspicuous, from subliminal updates to high-energy clickbait.”

There are both personal and societal consequences of these chaotic interactions between the brain and social media. Generation Z, the first generation to grow up with this technology from middle school onward, is reported to be more anxious, more depressed, and more likely to experience suicidal ideation than previous generations. Furthermore, editing features on social media make the illusion of perfection easy to sustain, promoting unrealistic appearance ideals and constant self-comparison to peers. Technology platforms have also fueled global issues such as political polarization, election interference, and fake news. For instance, in 2016, Twitter and Instagram joined Facebook and YouTube in abandoning chronologically ordered feeds, replacing them with algorithms that place the most profitable (i.e., attention-grabbing) content at the top of the feed. Not only did this update prioritize content from close friends, it also elevated other captivating content such as clickbait and conspiracy theories. Because the algorithm selects content that best aligns with an individual’s personal data, users are insidiously pushed into their own filtered, polarized worlds. 
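To illustrate the difference this change makes, the hypothetical sketch below orders the same invented set of posts two ways, first by recency and then by a made-up predicted-engagement score; the sensational items float to the top of the second ranking. Real feed-ranking systems combine far more signals than this.

# Hypothetical sketch: chronological vs. engagement-ranked feeds.
# Posts and engagement scores are invented for illustration.
posts = [
    {"text": "Cousin's graduation photos", "hour_posted": 9, "engagement": 0.20},
    {"text": "Friend's lunch", "hour_posted": 12, "engagement": 0.10},
    {"text": "Outrageous conspiracy clip", "hour_posted": 7, "engagement": 0.90},
    {"text": "Clickbait: 'You won't believe #4'", "hour_posted": 8, "engagement": 0.75},
]

chronological = sorted(posts, key=lambda p: p["hour_posted"], reverse=True)
engagement_ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["text"] for p in chronological])      # newest first
print([p["text"] for p in engagement_ranked])  # most attention-grabbing first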

Individuals can minimize the emotional consequences of social media by using it sparingly and consciously. Uninstalling apps, turning off notifications, and installing browser extensions that remove personalized recommendations are excellent ways to rewire the neural circuitry shaped by excessive social media use. Individuals can also push back against misinformation by fact-checking before sharing something online and by avoiding clickbait. Actively seeking out opposing perspectives, such as by following people you disagree with, is another effective way to avoid settling into a polarized bubble.  

Less than two decades after the advent of Facebook, social media has dangerously proliferated into a driver of loneliness, distraction, polarization, and misinformation. It must be recognized that the design of technology platforms enables these outcomes and that, until we see a long-term shift toward more ethical technology, individuals are largely responsible for managing the role social media plays in their lives. 

Sources

International Journal of Environmental Research and Public Health (2019). DOI: 10.3390/ijerph16142612

Clinical Psychological Science (2017). DOI: 10.1177/2167702617723376