We welcome Mark O’Neill to launch our #scifi #AI series with his review of Ray Bradbury’s 1951 classic, Fahrenheit 451. Much of the discussion about artificial intelligence centres on what machines might do to humanity; but, as Bradbury and O’Neill ask, is the more significant concern what AI might enable humanity to do to itself?

Fahrenheit 451 is the temperature at which the paper in books catches fire and burns. Guy Montag is a fireman in a post-literate future world on the brink of war. His job is to burn books, which are forbidden because they are the source of all discord and unhappiness. The Mechanical Hound of the Fire Department (a robot with limited AI and a lethal hypodermic needle) tracks down and kills dissidents who defy society by preserving and reading books.

‘Happiness’ comes from satiation with drugs and a constant stream of short-form ‘infotainment’. This is piped into domestic TV parlours on multi-wall-sized screens and into people’s ‘unsleeping minds’ on little ‘Seashells’ in their ears: ‘an electronic ocean of sound, of music and talk and music and talk coming in’.

Beatty, Montag’s boss, gives insight into the state of affairs:

Digest-digests, digest-digest-digests. Politics? One column, two sentences, a headline! Then, in mid-air, all vanishes! Whirl man’s mind around about so fast under the pumping hands of publishers, exploiters, broadcasters, that the centrifuge flings off all unnecessary, time-wasting thought!

He goes on:

School is shortened, discipline relaxed, philosophies, histories, languages dropped, English and spelling gradually neglected, finally almost completely ignored. Life is immediate, the job counts, pleasure lies all about after work. Why learn anything save pressing buttons, pulling switches, fitting nuts and bolts?

So how did it happen? Beatty, again, and this is key:

It didn’t come from the government down. There was no dictum, no declaration, no censorship, to start with, no! Technology, mass exploitation, and minority pressure carried the trick.

Montag’s relationship with books is at odds with the core of his profession. His life unravels when Beatty discovers Montag’s secret.

AI / Automation

At first glance, the state’s robotic killer, the Mechanical Hound, appears to be the nastiest piece of technology. However, the genuinely sinister tech is the pervasive AI and algorithms directing the feeds which provide society’s ‘happiness’. This tech ultimately defines what is societally acceptable and, by extension, what unacceptable behaviour merits state-sponsored extra-judicial killing.

So what?

The obvious question posed in the book, that of the ethics and morality of autonomous state-sanctioned killing machines, is perhaps not as interesting as some others raised. In an age of machine learning and ubiquitous media feeds generated by algorithms consuming our data and responding to our perceived ‘need’, how will people maintain independent critical thinking space? Is the growing dependency on other ‘things’ doing our thinking something to be concerned about?

Religion was 19th-century Marxism’s ‘opiate of the masses’, but in Bradbury’s book the new mass opiate is continuously streamed interactive ‘entertainment’. Fahrenheit 451’s 1950s science fiction is 2018’s reality. Contemporary Australian homes routinely feature rooms resembling Bradbury’s TV parlours, streaming similar material. Is our society immune to what Montag describes, or are we already on the way there?

Professionally, as a ‘5th generation’ military increasingly takes at face value ‘feeds’ algorithmically sorted from big data sets and piped into our ‘command post parlours’ on multiple wall screens, what must we remain aware of and retain as ‘human’?

War is a human endeavour: at what point does the loss of human interaction and engagement change the nature of war?

Bradbury wrote in the Afterword of one edition of the book: ‘you don’t have to burn books, do you, if the world starts to fill up with non-readers, non-learners, non-knowers?’ 

What are we doing to mitigate this risk, given our infatuation with social media and fake news, and our rush to embrace AI and machine learning?

Lieutenant Colonel Mark O’Neill is an experienced Australian Army officer with operational experience in Somalia, Mozambique, Iraq and Afghanistan. He has been the Chief of Army Fellow at the Lowy Institute for International Policy, the Joint Operations liaison officer to DFAT, and a lecturer in security and strategy at the National Security College. In 2013 he was awarded a PhD from UNSW. He is currently posted to Army Headquarters. The opinions expressed are his alone and do not reflect the opinion of the Australian Army, the Department of Defence, or the Australian Government.