If you notice anything you think is an error in the book, please email me at chasingthescream@gmail.com and I will look into it and thank you on the website if you are correct. I’ll post the corrections here as they start to come through.
Posted on January 10th, 2024
In December 2023, I received an email from a conservative journalist writing for a website. He said he wanted my response to a series of accusations against my books. When I read the accusations, it was clear that he had taken them from Twitter, and they contained such basic errors about what the books say that they could only have been written by people who haven’t read my books. I sent a detailed reply to the journalist, and after looking at the facts, he has clearly decided not to publish these untrue claims. But since these falsehoods are still there in the more obscure corners of the internet, I thought it might be useful to post the facts about them here too.
1. He alleged: “You misrepresented the results of a study showing that Carnegie Mellon University Students performed, on average, 20% worse on a test when they were being bombarded with text messages.”
The claim that I did not accurately represent it comes from the historian Matthew Sweet, who invented it on Twitter. Sweet’s claims – this one and others he has made – have been independently investigated. After the Guardian/Observer ran an extract from my book, Sweet put this and eight other claims to the Guardian’s readers’ editor, Lis Revans, demanding the newspaper issue corrections to the extract. Revans then took several weeks to read all the relevant scientific studies. After this extensive process, she ruled that all of Sweet’s criticisms of the book were meritless and warranted no corrections.
(I do not know Revans, and have never met or spoken with her, except by email in response to Sweet’s complaint.)
Sweet made blatant and remarkably sloppy factual errors in his attempts to criticise the book. I will give a typical example. I describe in the book the first scientific study that proved that collective attention really is shrinking. Sweet claimed I had misled people about what the study said. He wrote that in fact if you read the study, even its authors admit there is “no evidence” collective attention is getting worse. It sounded damning; many people retweeted the claim.
Anyone who read the study could see that he had made a very basic error. He had been quoting from the introduction to the study, where the authors explain that there was “no evidence” collective attention was getting worse – until they carried out the study. They then found precisely that evidence. Here is a blog post written by one of the authors of the study, Professor Sune Lehmann, explaining why what I wrote is totally accurate, and why what the historian wrote is “misleading,” “full of errors,” and “bullshit”. I recommend reading Professor Lehmann’s deconstruction of Sweet’s claims in full: https://sunelehmann.com/2022/01/09/oh-twitter/
Sweet’s claims about the Carnegie Mellon study are similarly false: I describe it accurately in the book, as the independent investigation of his claim found. I chose to highlight it because its finding – that being interrupted by technology harms attention – is absolutely typical of the wider scientific evidence about classroom-based multitasking and how it damages attention. For example, I direct readers to a key book named ‘The Distracted Mind’, published by M.I.T. Press and written by the leading neuroscientists Professor Adam Gazzaley and Professor Larry Rosen (who has arguably done more empirical research on college students’ attention and how it has been affected by technology than anyone else in the world). I interviewed both scientists. Rosen and Gazzaley summarise the findings of the whole field (on p126 of their book), saying that “technology-related multitasking in class… has been studied extensively, with researchers linking nearly every type of in-class technology use – including email, texting, laptops, social media, and more – to decreased classroom performance, regardless of how that performance is measured (grades, work productivity, etc) and across all grade levels ranging from elementary school to college.” (my italics). They go into much more detail in the chapter.
2. The journalist claimed that “A study on the nature of task-switching published in a book by Jean Twenge in 2017, which is relied on in respect of conclusions on the limits on our ability to focus,” is not described accurately in the book.
This criticism is untrue. Here is the study: https://academic.oup.com/joc/article-abstract/64/1/167/4085996
Here is the text from my book, p.10, describing it:
“For example, a small study investigated how often an average American college student actually pays attention to anything, so the scientists involved put tracking software on their computers and monitored what they did in a typical day. They discovered that, on average, a student would switch tasks once every sixty-five seconds. The median amount of time they focused on any one thing was just nineteen seconds.”
Anyone reading the study, and then reading my description of it, can see that I have described it entirely accurately.
It is offered in the book as one small example of a much wider body of evidence. I directed the journalist back to Gazzaley and Rosen, as I do with readers of the book. Summarising the findings of the field, they write (p123 of their book): “We can no longer focus in the classroom or the workplace; nor can we resist the pull of responding to alerts and notifications while we are supposed to be spending time with our family and friends.” They then describe a study Rosen carried out (p124) which showed that “the typical student couldn’t stay focused on work for more than three to five minutes.” They give further examples on p124-7. This is just one example of a broad range of scientific evidence that I rely on for my conclusions.
3. The journalist claimed: “The study commissioned by you and the Council for Evidence-Based Psychiatry from YouGov to carry out an opinion poll into attention in the United States and Britain, which is not, as alleged, the first scientific opinion poll on this subject matter, and which does not, as alleged, show that the overwhelming majority of the US and UK population are unable to “give attention to things that matter””
This is what I write in the book about the poll, on p171-2:
In early 2020 I decided to team up with the Council for Evidence-Based Psychiatry, and together we commissioned YouGov—one of the world’s leading polling companies—to carry out (so far as I can tell) the first scientific opinion poll ever conducted on attention, in both the U.S. and Britain. The poll identified people who felt their attention was getting worse, and then it asked them why they believed this was happening. It gave them ten options to choose from, and asked them to select any and all that they felt applied to them. The number-one reason people gave for their problems focusing was not their phones. It was stress, which was chosen by 48 percent. The number-two reason was a change in life circumstances, like having a baby or getting older, also chosen by 48 percent. The number-three reason was difficult or disturbed sleep, which was named by 43 percent. Phones came fourth, chosen by 37 percent.
Every word of this is accurate.
The journalist bizarrely claimed that this section is being offered by me as evidence for a statement I make 163 pages earlier, on page 9 of the book – which is where the words he quotes, supposedly in connection with the poll, actually appear. There, I describe watching the crowd in front of the Mona Lisa in the Louvre, who wrestled and jostled their way to the front, only to turn around and take selfies. I write:
“This seemed to fit with a much wider sense that had been settling on me for several years—one that went well beyond bad tourist habits. It felt like our civilization had been covered with itching powder, and we spent our time twitching and twerking our minds, unable to simply give attention to things that matter.”
In that part of the book, I am describing my impressions of a scene I had directly witnessed. I make it clear I am describing personal impressions: “It felt like…” These impressions have nothing to do with the poll I describe much later in the book, and the poll is not presented as evidence for those impressions. Only a wilful misrepresentation could claim otherwise.
I offer a broad range of evidence in the book that people are struggling to give attention to the things that matter, but this poll is not one of those pieces of evidence. It is offered to explain what people who are experiencing this problem believe is the cause.
The journalist claimed this is not the first scientific opinion poll conducted on this topic. I asked him to send me a link to any earlier opinion poll he was aware of, but he did not do so. I looked very carefully for one when working on the book and never found one. I asked several experts, and none were aware of one. Because I could not be 100 percent certain, I carefully caveated my description of the poll: “together we commissioned YouGov—one of the world’s leading polling companies—to carry out (so far as I can tell) the first scientific opinion poll ever conducted on attention.” (Italics added.) So I clearly flag up to the reader that it is possible there could have been another poll, but that I made a good faith effort to locate one and was not able to. Nobody has yet directed me to another one, and I do not believe one exists, but I am happy to be corrected if there is one.
4. The journalist claimed that I ignored or did not engage “with studies which contested the results of those you chose to rely on, for example in support of your conclusions on radicalisation via YouTube.”
Only somebody who has not read the book could make this criticism. The book engages with countervailing views throughout, and has been widely praised for this quality. Indeed, one of the most-discussed parts of the book is an extensive interview with Nir Eyal, a tech designer who holds strongly countervailing views, which I discuss with him in great detail and give a great deal of space to in the book. And on the specific example the journalist singled out as an area where he says I have failed to do this – YouTube radicalisation – anyone who has read the book would know that in fact I engage explicitly with the countervailing evidence on this very point at some length. This is what the book says on p155:
“I’ll give you one detailed example, so you get a sense of this controversy. Tristan [Harris] argues YouTube is radicalising people, based on an array of evidence I mentioned before. Nir [Eyal] responds by pointing to a recent study by the coder Mark Ledwich that suggested in fact, watching YouTube had a slightly deradicalising effect on its users. Tristan, in response, directs people towards the Princeton academic Professor Arvind Narayanan, and many other critics of this study, who say that the research Nir is citing here is worthless. Let’s go through this, step by step. The people who say YouTube radicalises you argue that this effect happens over time. You create a profile, you log in, and gradually YouTube builds up knowledge of your preferences, and to keep you watching, the content it feeds you gets more extreme. But the research Nir cites didn’t study any logged-in users. All they did was go to a video on YouTube – say, Boris Johnson giving a speech – and without logging in, they looked at the recommendations that appeared along the side. If you use YouTube in this highly unusual way, the videos don’t become more extreme over time, and it might be fair to say YouTube is deradicalising. But huge numbers of YouTube users do log in. (We don’t know exactly how many, because YouTube keeps that information secret.)”
I then provide the relevant studies in the endnotes, on p309:
M. Ledwich and A. Zaitsev, ‘Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalisation’, arXiv:1912.11211 [cs.SI], Cornell University, 2019.
https://arxiv.org/abs/1912.11211
See also A. Kantrowitz, ‘Does YouTube Radicalize?’, OneZero, 7 January 2020. https://onezero.medium.com/does-youtube-radicalize-a-debate-between-kevin-roose-and-mark-ledwich-1b99651c7bb; W. Feuer, ‘Critics slam study claiming YouTube’s algorithm doesn’t lead to radicalisation’, CNBC, 30 December 2019, updated 31 December 2019. https://www.cnbc.com/2019/12/30/critics-slam-youtube-study-showing-no-ties-to-radicalization.html
Nobody who has read ‘Stolen Focus’ could claim that it doesn’t engage with countervailing viewpoints, or that it doesn’t present countervailing views and studies about YouTube radicalisation.
This is why, to my knowledge, all of these claims have been confined to Twitter, and have not appeared in any publication that has a fact-checking process. Elementary fact-checking shows that they are untrue.
Posted on 2nd March, 2022. Professor Roxanne Prichard is referred to as being at the University of Minneapolis. It should say that she is at the University of Minnesota – Twin Cities, in Minneapolis. That will be corrected in future editions.
Posted on 25th April, 2022. On p149, I say of Nir Eyal’s book ‘Hooked’: “the CEO of Microsoft held it aloft and told her staff to read it.” This should say “a senior executive at Microsoft”, not the CEO. Thank you to Nick Howe for pointing out this error.
Posted on 15th August, 2023. In some early editions of the book, I refer to Stephen Hinshaw as a professor at Stanford. He is a professor at UC Berkeley, where I interviewed him. Thank you to Alexandra Apple for pointing out this error.