Much has been written and said in recent years about the effects of smartphone use among children, and, in particular, the use of social media applications.
Jonathan Haidt is a thought leader in this space and is helping to galvanise the smartphone-free childhood (SFC) movement that is taking off across the Anglosphere. I have just finished reading his book The Anxious Generation, and I feel it is incredibly well-researched, with excellent and fair use of data to land its points. In fact, the last hundred or so pages (roughly 25% of the book) are references.
There is even an accompanying website for extra data, research and graphs that could not fit into the book!
This book is a must-read for anyone who has children, but it is also incredibly revealing about our adult selves:
After more than a decade of stability or improvement, the mental health of adolescents in many countries around the world deteriorated suddenly in the early 2010s. Why have rates of depression, anxiety, self-harm and suicide risen so sharply, more than doubling in many cases?
In this book, Social Psychologist Jonathan Haidt argues that the decline of free-play in childhood and the rise of smartphone usage among adolescents are the twin sources of increased mental distress among teenagers.
Haidt delves into the latest psychological and biological research to show how, between 2010 and 2015, childhood and adolescence got rewired. As teens traded in their flip phones for smartphones packed with social media apps, time online soared while time engaging face-to-face with friends and family plummeted, and so did mental health. This profound shift took place against a backdrop of diminishing childhood freedom, as parents over-supervised every aspect of their children’s lives offline, depriving them of the experiences they most need to become strong and self-governing adults.
The Anxious Generation reveals the fundamental ways in which this shift from free-play to smartphones disrupts development – from sleep deprivation to addiction – with separate in-depth analyses of the impact on girls and boys. Grounded in ancient wisdom and packed full of cutting-edge science, this eye-opening book is a life raft and a powerful call-to-arms, offering practical advice for parents, schools, governments, and teens themselves.
I had read about halfway through the book before I took the vacation I mentioned recently, and it’s part of why I wanted a blackout on social and comms on my phone as much as I could manage. I did manage to put my phone on do not disturb for the week. I had already been planning to write this post for a while, but the numbers from the book are shocking, and are a real call to action for me. They’re everything we suspected and worse.
The SFC movement is really taking off and I’m part of it in my local area. While many of my kids’1 friends have smartphones now, we have chosen not to follow and will continue with other like-minded parents in this movement locally.
I know what you’re thinking though:
I can see how this is somewhat related to data, but how is this really relevant to data folks?
A lot of what has been written about this topic and what has gone wrong is in terms of the end product and its effect on people. I want to peel back a layer and talk about the thought processes that feed into the product, and what we data folks, in data and machine learning engineering (DE & MLE), have contributed to it.
In my time as a data person and leader, I have worked in online grocery, payments, Fintech and an e-commerce marketplace, before entering Data SaaS startups, where I remain.
Payments and Data SaaS are B2B industries, and you don’t interact with customers in the same way as B2C - they truly are customers in this regard.
At the Fintech lender I worked at, because of how highly regulated consumer credit was and how the products were naturally easy to become dependent on (not psychologically, but financially), we didn’t consider our customers in terms of engagement too much - we didn’t have to work hard to get them to come back.
At the online grocer, we sold something necessary that people buy at an expected cadence and in an expected quantity - the amount of food they buy is pretty consistent, and really we were competing with other grocers to retain our customers, not for their attention. Grocery is one industry where capitalism seems to work well in the UK - people have choices about where to shop, the grocers use standard go-to-market strategies (cost leadership, premium, bundling, niche) to attract and retain customers, and competition is real and fierce.
It was at the e-commerce marketplace, called Lyst, where I really started to learn about user engagement. Early in my time there, I heard about what was considered a big problem at the time - one and done (1nD). 1nD is where a user would find Lyst, organically or otherwise, choose the product they wanted to buy2, buy it, and not return or engage again, or at least not for a long time.
This was considered a problem because it meant, in order to be sustainable, we were more or less constrained to keep our customer acquisition cost (CAC) under the average profit per customer segment for these short-term customers. We essentially had to acquire customers who were CAC-profitable almost immediately or within a short period, like a month. While this was actually possible with the right marketing strategy, it was problematic, as investors didn’t like this kind of behaviour in users.
Lyst’s hypothesis was that a retentive product could be built using ranked feeds and search, mobile apps, collaborative filtering, vector-based customer grouping, social features… and so on. The idea was that such a retentive product would result in users who would come back to our site and apps on a regular basis, mostly for entertainment (like reading a fashion magazine) but also buying regularly. If these users would stay with us and buy regularly, then their lifetime value would be much higher than 1nDs. With this higher value, we could afford to acquire customer segments for a much higher CAC and therefore justify higher spending on marketing and then higher growth… and on and on and on.
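The economics behind this hypothesis can be sketched with some entirely made-up numbers (these are illustrative only, not Lyst's actual figures):

```python
# Illustrative sketch of the CAC vs lifetime-value trade-off described above.
# All numbers are invented for demonstration purposes.

def lifetime_value(profit_per_order: float, orders_per_year: float,
                   retention_years: float) -> float:
    """Total profit expected from a customer over their time with us."""
    return profit_per_order * orders_per_year * retention_years

# A "one and done" (1nD) user: a single purchase, then gone.
ltv_1nd = lifetime_value(profit_per_order=20.0, orders_per_year=1.0,
                         retention_years=1.0)

# A retained, engaged user: several orders a year over multiple years.
ltv_retained = lifetime_value(profit_per_order=20.0, orders_per_year=6.0,
                              retention_years=3.0)

# To be sustainable, customer acquisition cost (CAC) must stay below LTV,
# so retention dramatically raises how much you can spend on marketing.
max_cac_1nd = ltv_1nd
max_cac_retained = ltv_retained

print(max_cac_1nd, max_cac_retained)  # 20.0 360.0
```

With 1nD users, marketing spend is capped near the profit of a single order; with retained users, the same arithmetic justifies an order of magnitude more spend per acquisition, which is why investors favour retentive products.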
Lyst wasn’t really targeted at children at all; we didn’t focus on offering children’s products, and high-end fashion doesn’t make sense for a segment that outgrows its clothes every few months and has no disposable income of its own. So the harm caused if we had achieved this high-engagement product would probably have been low, if measurable at all.
However, the concepts originate from social media and other harmful products that do attract and affect children badly. The idea of acquiring engaged “users” is well explained in Nir Eyal’s Hooked - hacking their psychological reward process in a similar way to a slot machine: pull the lever and get an occasional, random reward.
While it’s not wrong for data folks to work on products that increase engagement and retention of users, the harm of the product should be something we consider. I still remember to this day how a CXO member at Lyst reminded us that we called them users and not customers for a reason - we wanted to capture a share of their attention for the long-term, whether you call this addiction or not.
As described in The Anxious Generation, social media companies have very successfully used this method with like buttons and ranked feeds of content. None of this is possible without us data folks, though. Collecting event data about user behaviour, measuring the efficacy of product changes, building ML models that rank feeds or make recommendations to users, monitoring and optimising these models to maximise engagement metrics - all of it requires the efforts of data folks. We power a big component of how companies that have users keep them engaged and coming back for more.
The other component is product management and the choices PMs make in selecting strategies and features to increase engagement in addictive ways. At Lyst, we had a team dedicated to making our feeds as relevant to the user as possible. Whenever engagement metrics dipped for these feeds (which were among the most important product features we had for conversion and retention), it triggered a deep dive into whether our ranking models were still performing or whether there was a macro or other reason for the drop-off in performance.
In The Anxious Generation, ranked feeds are mentioned, but I want to focus on them more. When I first encountered Facebook, it was 2006, in my student halls (college dorm) room. There was no ranking - everything came in chronologically, and the content was entirely from your connections. You might have seen other people who were their friends (and not yours) interact on your connection’s content, but this was the limit to how much you would interact with non-connections.
Most people would open their Facebook tab (I think we had tabs back then), look through what their friends had posted (mostly photos of them on student nights out) and like or comment on these posts. Then you would stop, and go live your life in the real world. There was nothing else to see, you would check back again later or the next day for new posts from your friends.
Not too long after this, the like button was released, and as much as it is blamed for a lot of problems, I actually think it was innocuous in and of itself. In the beginning of the like button era, you just saw that a few of your friends had liked each other’s photos or posts - it wasn’t harmful. The real problem with the like button was how explicit an indicator of engagement it was - it’s much easier to be sure someone liked the content than to infer it from them hovering3 over it a while (were they actually looking, or did they get distracted by something else?).
The data from the like button and other explicit forms of engagement was subsequently used to rank content - to decide the order of your feed in order to maximise the time you spent on it. Initially, this wasn’t too problematic, as it just showed you what content was popular amongst your friends - it was a bit annoying that you missed some content from friends who were less popular, but not disastrous.
This initial ranking system, based on relevance, then led to even more sophisticated systems that could predict what levels of engagement specific content would have when shown to specific people. They even experimented with manipulating the emotions of user groups - very successfully.
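The shift from chronological to engagement-ranked feeds described above can be sketched in a few lines. This is a toy stand-in for the real ML systems; the field names, scoring function and weights are all invented for illustration:

```python
# Minimal sketch contrasting a chronological feed with an engagement-ranked
# one. The "model" here is a toy formula standing in for the predictive
# models real platforms train; all names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int              # seconds since epoch
    predicted_likes: float      # model's predicted chance of a like
    emotional_intensity: float  # 0 (neutral) .. 1 (extreme)

def engagement_score(post: Post) -> float:
    # Emotionally charged content tends to attract more engagement, so a
    # system optimising for engagement ends up boosting it.
    return post.predicted_likes * (1.0 + post.emotional_intensity)

def ranked_feed(posts):
    """What modern feeds do: order by predicted engagement."""
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts):
    """What 2006-era Facebook did: newest first, no model in the loop."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("alice", 100, predicted_likes=0.2, emotional_intensity=0.1),
    Post("bob",    90, predicted_likes=0.3, emotional_intensity=0.9),
    Post("carol", 110, predicted_likes=0.1, emotional_intensity=0.0),
]

print([p.author for p in ranked_feed(posts)])         # ['bob', 'alice', 'carol']
print([p.author for p in chronological_feed(posts)])  # ['carol', 'alice', 'bob']
```

Note how bob's emotionally intense post jumps to the top of the ranked feed despite being the oldest - the mechanism by which extreme content gets accelerated.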
The researchers, led by data scientist Adam Kramer, found that emotions were contagious. "When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred," according to the paper published by the Facebook research team in the PNAS. "These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks."
This leads us to today, where the content that is shown in the feed is that which is most likely to be engaged with. As extremes of positivity and negativity are much more likely to be engaged with than other more emotionally-neutral content, these types of content are pushed to the top of your feed, accelerating these emotions through society (most of the western world is on social media).
I remember watching some of my connections change their behaviour as they became hooked. They would post more and more emotionally-charged content, whether that was towards rage, sadness or joy. Many were subconsciously programmed to perform in this way from the feedback loops.
The Facebook founders purposefully created something addictive, the social network's first president told Axios in an interview.
“God only knows what it's doing to our children's brains,” Sean Parker said in the interview published Thursday.
With each like and comment, Facebook is “exploiting” human psychology on purpose to keep users hooked on a “social-validation feedback loop,” Parker said, adding that it is “exactly the kind of thing that a hacker like myself would come up with.”4
The final straw for me was the addition of content from non-connections in my feed. I don’t think this is talked about enough, but it is so important because it effectively connects the whole world and allows this emotional contagion to spread incredibly quickly and globally. Someone likes something… one of their connections likes it… one of their connections likes it… one of your connections likes it… then you see it. Once you go to 5 or 6 steps of connections in society, you’re more or less connected to the whole world.
This combination of predictive ranked feeds and content from non-connections (PRFNCC) proved toxic enough to persuade me to leave social media. I remain on LinkedIn as a professional network, although it increasingly has social media-type behaviour.
What I’ve described above is how social media, through the products only possible with data and the work of data folks, has been harmful to adults. As many of you have seen, and with the testimony of whistleblowers like Frances Haugen, the effect of PRFNCC is much more harmful to children. The Anxious Generation covers this in great detail and clarity. Children are effectively being groomed and hooked to perform on a global stage by social media, through the effects of PRFNCC.
If you read the book, it differentiates between the effects on adolescent boys and girls: boys are less affected by social media, but more by online multiplayer games. I was part of the generation that had a games console from very early on. I had a Mega Drive at the age of 7, then an N64, a PS2… until adulthood. While these systems did consume a lot of time, they didn’t really have any social features. If I had finished my last new game, I would often not play on my games console for weeks, as there was nothing to do. If you wanted to play with friends, they had to come over or vice versa.
This changed with high-speed internet and online games like Counter-Strike and CoD, but they didn’t really provide social media features, just servers where you could play with randomers, friends or both. You had to be organised with your friends and any online folks you had recruited for your clan to play together. This was difficult - like herding kittens - and by no means an addictive social experience. The love of the game and the thrill of possibly winning was why we came back again and again.
I think the harmful effect of video games we’re seeing today is actually the added social media features geared towards adolescent boys. I don’t believe it’s the games content itself, as this was around well before 20105. In reality, the games content hasn’t changed very much since the advent of 3D gaming on the PS1 - the graphics and gameplay have gotten richer, but that’s mostly it.
When my son talks about what his friends at school have been saying about video games, it’s things like how one of his friends added a connection without knowing for sure who it was (thankfully it was another friend), or how one of the group had been annoying and the others had blocked him. While messaging groups don’t have ranked feeds, they have also been really harmful to children, and from what I have heard from the children at my son’s school, video game social media features sit somewhere between these messaging groups and fully-featured social media.
I know of many engineers and data folks who would refuse to work in the social media industry because they believe it’s harmful for people and, in particular, children, for similar reasons to why most won’t work in the tobacco and gambling industries.6 However, the tobacco and gambling industries have real and significant barriers to children accessing their products. Compare this to social media, where anyone who “says” they are over 13 can have access.
As the gambling industry has shown, it’s entirely possible to create significant barriers to underage users having access to their products online - they know it would be the end of the road for them if they didn’t achieve this. Social media companies don’t have this fear, as there is no real legal or financial consequence to them yet.
When social media companies say they can’t prevent under-13s having access, they really mean that it would hurt profits and allow their competitors to acquire these users instead of them. We certainly have many technological means. You could require identification, a verified adult parent to confirm age and even use non-deterministic methods like AI age image recognition to create barriers, plus many other ways. This is well covered within The Anxious Generation, too.
I struggle with my own feedback loops on LinkedIn - it’s hard not to get high from highly-engaged content you put out there and feel low when something you post isn’t interacted with much. I also struggle with my feed of Google news and scrolling for too long and often, but at the same time, I use it as something to do when I’m bored and find interesting and useful things there (which is why I haven’t given it up, unlike my social media accounts where I didn’t get any such benefits).
Hope
I really feel that 1nD was actually a good thing - we were so relevant and useful that a user got what they came to do done quickly and efficiently. We were really treating them as customers in this way, not as users we wanted to keep scoring. I’m really hopeful that this coming era of AI-enabled applications brings us back towards 1nD. AI applications are well-suited to this: you have a one-shot chance to solve the user’s problem, or a conversation where you can refine the input to the point where you can solve it. I hope we move towards utility and the monetisation of utility, and away from the user engagement model.
I was one of the first three members of my local SFC group (one of the other two was my wife), but this has since expanded to the point that this morning it needed to split according to the schools in the area!
The state of New York passed a bill on Friday to restrict social media companies from using ranked feeds to serve content to under-18s! This is a huge win, and I hope governments around the US and the rest of the world follow suit.
The Stop Addictive Feeds Exploitation (SAFE) for Kids act will prohibit social media platforms like TikTok and Instagram from serving content to users under the age of 18 based on recommendation algorithms, meaning that, instead, social media companies will have to provide reverse-chronological feeds for child users.
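The rule the act imposes can be sketched as a simple branch on verified age. This is a toy illustration of the requirement as described above, not any platform's real API; all names are hypothetical:

```python
# Toy sketch of the feed-selection rule the SAFE for Kids act would impose:
# under-18s get a reverse-chronological feed (no recommendation algorithm),
# while adults may still get the engagement-ranked one.
def select_feed(posts, user_age: int):
    if user_age < 18:
        # Reverse-chronological: newest first, no personalisation.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    # Engagement-ranked: ordered by the platform's predicted engagement.
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

posts = [
    {"id": 1, "timestamp": 100, "engagement_score": 0.9},
    {"id": 2, "timestamp": 200, "engagement_score": 0.1},
]

print([p["id"] for p in select_feed(posts, user_age=15)])  # [2, 1]
print([p["id"] for p in select_feed(posts, user_age=30)])  # [1, 2]
```

The hard part, of course, is not this branch but reliably knowing `user_age` - which is why age verification dominates the remaining implementation questions.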
The legislation describes algorithmic feeds as “addictive” and says they negatively affect children’s mental health.
The New York legislation defines an “addictive feed” as one that recommends, selects or prioritizes media based on information associated with a user or their device. It would compel the state’s attorney general to disseminate rules for enforcement. A company found in violation would have 30 days to correct the issue or face remedies of up to $5,000 per user under the age of 18. 7
I’ve often said that if I could have the Facebook product of 2006 to 2008 again, I would consider using it now, but I haven’t been able to since they updated the product. If I were to create an account now and say I was a child between the ages of 13 and 17 (with me as the parent), would I be able to get that product back? I don’t think so, because of how most of the user base has changed its behaviour based on the feedback loops in play. I do wonder what would happen if a big percentage of adults decided to be treated as children and refused to verify as adults… could we see a reversal of the effects of PRFNCC?
The problem with asking the social media companies to govern themselves is that they can’t do it without giving away market share to a competitor who won’t self-regulate. They need this legislation in order to become safer. It levels the playing field for safety - if TikTok refuses to put in the protections, Meta can complain and TikTok can be banned from operating.
As a data person, I really like how they’ve targeted ranked feeds, as I know this is the mechanism that social media companies use - and that we have built - that can be so destructive. We ban children from alcohol, cigarettes, recreational drugs and many other things for their protection… this feels just the same.
I wish the legislation also required that social media companies offer this “reverse-chronological feed” to adult users, even if it is a paid version. I also wish the legislation required that only content from a child’s direct connections could enter the “reverse-chronological” feed, and that children weren’t allowed to use group messaging features, but perhaps this first act passed can be a foothold to add things like this.
There are many things still to be solved even to implement this new act, including age verification and barriers to entry8, but I feel this is a step in the right direction.
1. My children are 7 and 8 (nearly 9) years old. While not all of their friends have smartphones, we’re really beginning to feel the peer pressure, with some bullying beginning to occur due to the fact they don’t have smartphones. This was expected, but has come along a bit earlier than I was hoping.
2. Lyst was essentially a fashion search engine, offering products from multiple brands and retailers in one place, with price comparison/optimisation between them.
3. And other such implicit methods, like scrolling back, slow scrolling, clicking… many of which are tricky to measure from a data point of view.
4. https://www.washingtonpost.com/news/the-switch/wp/2017/11/09/facebooks-first-president-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains/
5. We only have an old Nintendo Wii from when we were students living at home - I will most likely cave in and make the upcoming Nintendo Switch 2 our first modern games console for the house, but I intend to use it in a non-networked way as much as possible.
6. I’m not going to go into the wider harms to our society and democracy, which is also another reason why - it’s a whole other topic of its own that is partly connected to this one.
7. https://www.nbcnews.com/tech/social-media/new-york-passes-legislation-ban-addictive-social-media-algorithms-kids-rcna155470
8. In an ideal world, the state would be able to provide a service where the user left the social media site for verification, was verified by ID, and then the service would send some kind of token back to the social media site to be stored against the user to denote they were verified as an adult. However, it’s unlikely that most governments would be competent enough to administer this - but they could subcontract it to a third party and use social media licensing fees to pay for it.
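The token flow sketched in this footnote could look something like the following. This is a minimal illustration only - names are invented, and a real system would use public-key signatures from the verifier rather than a shared secret:

```python
# Sketch of the age-verification token idea from the footnote above: a
# trusted verifier checks the user's ID and issues a signed "adult" token;
# the social media site can verify the token without ever seeing ID
# documents. The shared-secret HMAC scheme here is illustrative only.
import hashlib
import hmac

SECRET = b"shared-between-verifier-and-platform"  # hypothetical

def issue_token(user_id: str) -> str:
    """Run by the verification service after checking the user's ID."""
    sig = hmac.new(SECRET, f"adult:{user_id}".encode(), hashlib.sha256)
    return f"adult:{user_id}:{sig.hexdigest()}"

def is_verified_adult(token: str) -> bool:
    """Run by the platform; it stores only the token, no ID data."""
    try:
        status, user_id, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{status}:{user_id}".encode(),
                        hashlib.sha256).hexdigest()
    return status == "adult" and hmac.compare_digest(sig, expected)

token = issue_token("user123")
print(is_verified_adult(token))                # True
print(is_verified_adult("adult:user123:bad"))  # False (forged signature)
```

The key property is separation of concerns: the platform learns only "this user is a verified adult", while the verifier never learns what the user does on the platform.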