Photo by Jacob Ufkes
It is both apt and ironic that we, the owners of smartphones, are known as “users”. For as our smartphone use becomes increasingly compulsive, it starts to look more and more like we are the ones being used.
In October 2017, Sean Nakasawa completed the inaugural Moab 240 Endurance Run—a 238-mile foot race through some of the most challenging terrain in Utah. Against a cut-off time of 112 hours, or roughly 4.7 days, Sean finished in 67 hours, 50 minutes and 10 seconds. In other words, he ran for almost three days—an astonishing feat. But even more astonishing is that this was only good enough for second place. First place went to Courtney Dauwalter, who could have had a shower, a leisurely dinner and a full night’s sleep before Sean crossed the line ten hours later. If Sean’s achievement is incredible, Courtney’s is almost unbelievable. And yet the Moab 240 will never make much money, if it makes any at all.
Where our attention is directed determines how we spend our time. The Moab 240 will not generate significant profits because, despite the extraordinary feats on offer, it is fairly difficult to convince an audience to pay attention to some people running slowly through the middle of nowhere for days on end. The profitability of experiential products has always turned on their ability to capture and maintain attention, from gladiatorial combat and concerto performances to comedy shows and sporting events. Until recently, this has not been a problem. Experiential products have always been limited in terms of length, quantity and availability, and so the extent to which they can lay claim to our attention—and therefore our time—has been similarly limited. There is only so long we can spend at stadiums, in cinemas and even—despite the worried noises parents have made over the years—in front of the television.
But things have changed. The introduction of the iPhone ten years ago started a revolution that has culminated in the majority of us having an Internet-enabled device to hand all day, every day. This is not an inherently bad thing. Smartphones have placed the sum of human knowledge at our fingertips, allowing us to be better informed than ever before. They let us keep in touch with friends and family wherever they are on the planet. They have even saved lives. But with virtually unlimited access to our attention, the smartphone has had some unintended and undesirable consequences.
A 2016 study [1] found that the average person racks up 2,617 screen touches a day over sessions adding up to 145 minutes; the top 10% of users averaged 5,427 touches over 225 minutes. A 2015 study [2] of Swiss students found that 16.9% of them were addicted to their smartphones [3]. Another 2015 study [4] found that heavy smartphone use was correlated with increased depression and anxiety and with poor sleep quality. Although the spectrum of effects is broad, most of us can at least recognise the compulsion to check our phones one more time, open one more app or consume one more piece of content; the realisation that we did so before we were even aware of it; and the regret that follows an excessively long smartphone session. How did we get here?
Whether it be the developer, the creator or both, someone expects to make a profit from publishing content. And yet we have come to expect free access to most of it, from simple tweets, pictures and status updates to professional writing, video and audio. If customers are the ones who pay for products, then in this picture we are not the customers, and the content is not the product. Instead, creators and platforms usually make money by letting businesses place ads in and around their content, which makes advertisers the true customers. And we are the target of those ads, which makes us, and our attention, the true product. As a result, behind every app are a thousand engineers whose job it is to manipulate us into giving their content, and their advertisers, as much attention as possible.
Take, for instance, notification design. Notifications matter to developers because they make us think about an app when we otherwise would not, which makes each one an opportunity to capture our attention and draw us back in. Say we open an app to post a selfie, and people start Liking it. Those Likes generate notifications, encouraging us to return to see who Liked our selfie and to check on the running total. And because each of those Likes gave us a lovely hit of dopamine, we may then end up opening the app a few more times to check our Likes before another notification even arrives, refreshing our feeds like a gambler pulling a slot machine’s lever.
Given how effective notifications are at getting us to visit apps, you can see why developers want us to see new ones as regularly as possible—which takes our slot machine analogy even further. The addictiveness of a slot machine lies in its unpredictability. The manufacturers may know that their machine pays out five times out of a hundred, but they will make sure those five payouts arrive intermittently rather than on every twentieth spin, and that the size of each payout varies. Our selfie also provides rewards that are intermittent—in terms of whether anyone has Liked it since we last checked—and variable—in terms of how many Likes we got. Developers can also use algorithms to optimise that intermittence and variability, perhaps by making sure that people do not miss the chance to Like our selfie even if they have not checked their news feeds in a while (which is partly why feeds are no longer chronological) [5]. All this heightens our compulsion to check our phones in case we have received any new Likes, Comments or something similarly exciting, even though the notifications we actually see are often mundane. Increasingly, they are: notifications are so effective that developers are incentivised to invent new things to tell us about, even when those things are not particularly interesting. While I was writing this paragraph, I received a notification that read: “Barack Obama tweeted after a while. See what they said.” I immediately jumped on Twitter to see what he had said, which helpfully demonstrates that these tricks are so effective, and so subtle, that I am susceptible to them even while writing about them.
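The slot-machine mechanics described above (intermittent payouts of variable size) are simple enough to sketch in a few lines of code. The toy simulation below is purely illustrative: the function name, the payout rate and the Like counts are all invented for the example, and no real app works exactly this way. But it captures the variable-ratio schedule that makes checking so compulsive—most checks return nothing, and the occasional reward is unpredictable in size.

```python
import random

def likes_since_last_check(rng, payout_rate=0.3, max_likes=10):
    """Toy variable-ratio reward: most checks pay nothing, and when one
    does pay, the size of the payout is unpredictable."""
    if rng.random() < payout_rate:        # intermittent: only some checks reward us
        return rng.randint(1, max_likes)  # variable: reward size is unpredictable
    return 0

# Simulate a day of compulsive checking.
rng = random.Random(42)
checks = [likes_since_last_check(rng) for _ in range(20)]
print(checks)  # mostly zeros, punctuated by occasional wins of varying size
```

The fixed seed just makes the toy run reproducible; in a real feed the “seed” is our own behaviour, which is precisely what the algorithms tune against.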
Apps can even learn from the way we consume content, tailoring their interventions to hack our individual psychology. Every time we Like, Comment, click or even hover over a piece of content, we are telling the app’s algorithms what pushes our buttons. That is not the same as telling them what we get the most value from. We might well appreciate an informative article about some world event we have been curious to know more about; but, chances are, we are more likely to be engaged by simpler content—a funny cat video, yet another iPhone unboxing or a totally relatable meme, depending on who we are. And “engage” is a neutral term. It could mean something that amuses us, but it could also be—and often is—something that pisses us off. Making us happy is not the only, or even the most effective, way to get our attention.
Content creators want engagement too, whether for the ad money in the case of professionals, or just for the dopamine hits in the case of you and me. The result is a kind of Darwinian natural selection in which the features of more engaging content—such as simplicity, outrage and reinforcement of audience beliefs—proliferate, and those of less engaging content—such as nuance, moderation and challenge of audience beliefs—dwindle. But while the style of content has become more homogenised, the substance of content remains as diverse as ever. What has changed substance-wise is not what there is, but what we see. People on the political Left tend to be engaged by content that reinforces their Left-wing views and provokes outrage against those on the Right with whom they disagree. People on the political Right tend to be engaged by content that reinforces their Right-wing views and provokes outrage against those on the Left with whom they disagree. Same style, different substance. And because there are lots of people on both sides of the aisle, there is a big market for both kinds of content. But you would be forgiven for not knowing it. Chances are that almost all of the content you see is geared towards your own position on the political spectrum, because apps learn what we are engaged by and show us more of it. The danger of this situation, and as it turns out the reality, is that we can be led to think that most people agree with us on most things. This is a big part of why the results of the EU referendum in the UK and the 2016 Presidential Election in the US came as such a shock to people on the Left.
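The feedback loop behind this homogenisation can be sketched in miniature. The code below reflects no real platform’s ranking system; the per-topic affinity scores, the post structure and the weights are all invented for illustration. The point is structural: if every engagement nudges a topic’s score upward and the feed is sorted by those scores, the top of the feed converges on whatever we already engage with.

```python
from collections import defaultdict

def rank_feed(posts, affinity):
    """Order posts by learned per-topic affinity, most 'engaging' first."""
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

def record_engagement(affinity, post, weight=1.0):
    """Every Like or click nudges that topic's score up; nothing pushes it down."""
    affinity[post["topic"]] += weight

affinity = defaultdict(float)
posts = [{"id": i, "topic": "left" if i % 2 else "right"} for i in range(6)]

# Simulate a user who only ever engages with one side of the aisle.
for _ in range(3):
    feed = rank_feed(posts, affinity)
    for post in feed:
        if post["topic"] == "left":   # the user's taste, as the app observes it
            record_engagement(affinity, post)

# After a few sessions, the top of the feed is homogeneous.
top_half = rank_feed(posts, affinity)[:3]
print([p["topic"] for p in top_half])
```

Even in this crude sketch, the other side of the aisle never disappears from the platform; it simply sinks below the fold, which is the filter-bubble effect described above.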
So, not only are developers and creators incentivised to hijack our attention and eat up our time, but the very technology that promised to bring us closer together is, in some ways, dividing us. I do not know how we fix this problem, but I have a view on where we should probably start. As a species we have evolved powerful urges that served us well in our natural habitat but often lead us astray in the modern world. This is why many people who know precisely what they need to do to lose weight find themselves perpetually doing the exact opposite, even when it is a matter of survival. Just as profit motivates drug dealers to create addicts, the advertising model incentivises developers and creators to exploit similar bugs in our psychology. So, if we want better outcomes, we need better incentives. And if we want better incentives, we need to start by rethinking or replacing the advertising model.
In the meantime, there are things we can do as individuals to make technology meet our ends, instead of those of advertisers. First, we can take a step back and understand what we as individuals actually want those ends to be. Personally, to name a few, I want to stay informed, but not overwhelmed. I want my views to be challenged, so that I can change them when I am wrong and strengthen them when I am not. I want to avoid distractions when I am working and socialising. And I want my psychological bugs to be unavailable for exploitation.
Next, we need to set up our technology to meet those ends—which may sometimes involve some hacking of our own. For example, I use an app called Freedom to temporarily block apps and websites I find distracting, either ad hoc or, as I do, on a schedule (because I am weak, mine only lets me access social media for an hour each weekday evening, and not at all at weekends). I keep my feeds pared down by unfollowing accounts I do not get value from. And I expose my views to challenge by intentionally following people and organisations whose opinions and perspectives differ from my own, which, as a happy bonus, also serves to confuse the algorithms.
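For what it is worth, the scheduling idea is easy to express in code. The sketch below is a hypothetical illustration of a weekday-evening allowance like the one I describe; it is not how Freedom actually works, and the window times are invented for the example.

```python
from datetime import datetime, time

# Hypothetical schedule: social media allowed only 19:00-20:00 on weekdays,
# never at weekends. An illustration of the idea, not the Freedom app's logic.
ALLOWED_START, ALLOWED_END = time(19, 0), time(20, 0)

def social_media_allowed(now: datetime) -> bool:
    """True only inside the weekday evening window."""
    if now.weekday() >= 5:  # Saturday or Sunday: always blocked
        return False
    return ALLOWED_START <= now.time() < ALLOWED_END

print(social_media_allowed(datetime(2017, 11, 13, 19, 30)))  # a Monday evening
```

The useful property of any scheme like this is that the decision is made once, in advance and in cold blood, rather than a hundred times a day in the heat of the moment.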
However you set up your technology, it is, I think, important to do so deliberately. The term “users” is dehumanising; and yet, given how the industry is incentivised to get us to behave, it is perversely appropriate. Perhaps if we stop playing ball and start to use technology more intentionally, change will start with us.
If you enjoyed this essay, you can support me by sharing it with your friends. Building an audience is tough, so this really does help. Thanks in advance. JM
1. dscout: https://blog.dscout.com/mobile-touches
2. Haug et al.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4712764/
3. As measured by the Smartphone Addiction Scale (Short Version), which accounts for five factors: ‘daily-life disturbance’, ‘withdrawal’, ‘cyberspace-oriented relationship’, ‘overuse’ and ‘tolerance’.
4. Demirci et al.: https://www.ncbi.nlm.nih.gov/pubmed/26132913
5. Tristan Harris: https://journal.thriveglobal.com/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3