I restarted Homo Deus this week, as I had previously been put off by reviews claiming it was far less impressive than Sapiens (which I had enjoyed). But the more I read Homo Deus, the more ambivalent I became about whether or not to continue. I felt that Harari’s style could work when applied to topics I’m already interested in, such as AI and big data. Yet Homo Deus quickly fell from interesting to infuriating, as I learned just how badly it misrepresented one of its main subjects. (See the “The Turning Point” section below for details.)
The Starting Point
The first few pages impressed me, as Harari painted a memorable picture of an improving world. Within the prologue there are some fascinating ideas and questions that are worth thinking about. Pages 71-74 go into the history of garden lawns; this narrative shows how ideas about what an object means travel across countries and over time, and how those ideas become unquestioned assumptions baked into our societies and world-views.
Homo Deus also keeps the same casual and accessible writing style as Sapiens. Harari’s explanations contain bursts of flair, like rhetorical devices, assonance, and pop-cultural references, that make the text engaging and occasionally humorous. It almost reads like fiction, and sometimes that’s a downside; its smoothness meant I found myself sliding from one chapter to another and only then realising that I hadn’t questioned whether the segues and connections Harari made were actually logical.
As I kept reading, my opinion of the book kept changing, sometimes within a few pages. There are paragraphs which contain thought-provoking questions, and which make predictions that have to some extent been shown to be correct. For example, Homo Deus contains this description of how targeted algorithms might affect voting:
“in future US presidential elections, Facebook could know not only the political opinions of tens of millions of Americans, but also who among them are the critical swing voters, and how those voters might be swung. Facebook could tell that in Oklahoma the race between Republicans and Democrats is particularly close, identify the 32,417 voters who still haven’t made up their minds, and determine what each candidate needs to say in order to tip the balance. How could Facebook access this priceless political data? We provide it for free.”
This was written in 2015, before the Cambridge Analytica–Facebook scandal was widely publicised, but it’s certainly close to what happened. At the same time, Harari makes other predictions that have already been proven false. Talking about Microsoft’s Cortana, first developed just a year before Homo Deus was written, Harari envisions that:
“Next thing I know, a potential employer will tell me not to bother sending a CV, but simply allow his Cortana to grill my Cortana….As Cortanas gain authority, they may begin manipulating each other to further the interests of their masters…”
That’s already wrong, because Cortana is being switched off from most of its planned customer-focused purposes, even on Microsoft’s own devices. Harari’s rush to ascribe world-changing powers to the 12-month-old Cortana is a red flag that should make readers question how many of his other predictions could be similarly over-hyped.
Thinking about it logically, any book which puts forward this many predictions is bound to get some of them wrong. (At the same time, it’s bound to get some right by pure chance as well.) So it’s probably best to focus on the grander trends and themes across Harari’s ideas, rather than the individual company or piece of software that Harari attaches those ideas to. There are still issues after taking this perspective, because Harari presents his personal ideas of what the future might be as if they’re an objective “the future”. While he does say at one point (pg. 75) that “all the predictions that pepper this book are no more than an attempt to discuss present-day dilemmas”, his tone in the rest of the book makes that caveat seem hollow.
This brings me to the question that needs to be asked of most popular science, and of non-fiction in general: can this be trusted? More specifically, can I be confident that the information I’m reading has been well-researched, presented fairly, and given appropriate context and explanation? For Homo Deus, the answer is no.
Arguably, this could be expected to a point; Harari is making predictions about what could happen, so Homo Deus cannot be judged on evidence quite as strictly as a historical book. But even with this exemption, Harari’s predictions are based on how he interprets, combines and explains ideas that do exist now. So how well is current information, such as existing ideas or study results, communicated? Here, the major problem with Homo Deus becomes unavoidable.
The Turning Point
Homo Deus centres on the rise and potential fall of the ideals of “liberal humanism”. So you would expect there to be clear, rigorous definitions of “humanism” and “liberal humanism”, that align with how those terms have been used historically.
Unfortunately, that is not the case, to a shocking degree. Firstly, Harari defines humanism as “the worship of humans”; it takes mere minutes of research to see that this definition is flatly wrong. Harari then lumps together the existing ideas of secular humanism and religious humanism, splits them back out into three branches of humanism, and then makes a straw-man argument out of “liberal humanism” for the entire book.
In fact, Humanists UK have responded to what they politely call Harari’s “eccentric” view of Humanism. His account has proven so misleading, and so likely to cause misconceptions, that they have published multiple articles which demonstrate how Harari has misused the term Humanism, and explain what most Humanists actually do believe.
In addition to this shameful misinterpretation of humanism, which makes the majority of the book untrustworthy for me, I can pinpoint the sentence that made me want to give up on reading Homo Deus entirely. On page 360, Harari says:
“…you should probably replace your human pilots and soldiers with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves they all too often kill civilians by mistake. Computers programmed with ethical algorithms could far more easily conform to the latest rulings of the international criminal court.”
If Harari truly believes that an argument so shallow and reductionist that it would be marked down in an undergraduate essay is the solution to warfare, he is not the person to be writing a book that’s mostly about AI and algorithms.
The only reason I didn’t quit reading there was in case Harari returned to the idea and discussed why the solution wasn’t as simple as his original answer, and why the idea of “ethical algorithms” is complicated. Unfortunately, he never returned to the topic; conversations about ethics in tech are notably absent from the rest of the book.
Harari is very knowledgeable about history, but that doesn’t make him an oracle of the future. His ideas are just that: one person’s view of what the future could be like, based on today’s world. By trying to write a history of the whole future, across so many areas of knowledge and society, Harari has resorted to shallow takes and home-made definitions that spread faulty interpretations of existing philosophies.
My conclusion is not to read Homo Deus without context. If you have already started reading it, please look for other information about Humanism so that you can see why Harari’s definition and terminology are so misleading. The interesting questions that Homo Deus raises can be found in other AI-focused works like Hello World, while the discussion of free will and whether we have a single unified self was already done better by Baggini’s The Ego Trick.
This is the first time I’ve made an anti-recommendation here, as normally I just don’t write about books if I don’t enjoy them. But Homo Deus isn’t boring or mediocre. Instead, it’s readable, engaging, and wrong. The fact that it’s written in an accessible way makes it a bigger problem, because it can spread misinformation more effectively than rigorous factual sources can undo it.