Reviews

Artificial Unintelligence: How Computers Misunderstand the World

Author _ Meredith Broussard

Publisher _ MIT Press, 2018. 248 p. Cloth. $24.95. ISBN: 978-0-262-03800-3.

Reviewer _ Clem Guthro, Independent Librarian.

Broussard, an assistant professor at New York University’s Arthur L. Carter Journalism Institute, has written an accessible book on artificial intelligence (AI) and its grip on the popular imagination. In twelve short chapters, she lays out a cautionary narrative on the limits of AI and of technology in general. Her book joins several other recent volumes that attempt to show the limits of AI and the ethical implications of wholesale, blind adoption of AI to solve the world’s problems. These include M. Tegmark. Life 3.0: Being Human in the Age of Artificial Intelligence, 2018; J. Aoun. Robot-Proof: Higher Education in the Age of Artificial Intelligence, 2017; M. Boden. Artificial Intelligence: A Very Short Introduction, 2018; and H. Collins. Artifictional Intelligence: Against Humanity’s Surrender to Computers, 2018.

Computing grew up amid the idealism of the 1960s counterculture, which fostered a belief that an online world would be better, more just, and more equitable. Broussard argues that the promises of technology are out of sync with what technology can actually achieve. Her caution is not Luddism but a recognition that all computing is math-based and that there are limits to what math can do. We have fallen for technochauvinism, the belief that technology is always the best solution. Technochauvinism incorporates technolibertarian values, including the idea that computers are more objective and unbiased because they reduce everything to mathematical certainty. When we use technology alone to solve social problems, we reproduce many of the discriminatory and inequitable outcomes we currently face.

Broussard covers the basics of computer programming to show that computers are not sentient. All data, including “computer-generated” data, are socially constructed: people wrote the programs that collect data in very particular ways. She insists that computers are not like brains. If a piece of the brain is removed, the brain reroutes its neural pathways to compensate; a computer simply stops working if a piece is removed. Computers operate on mathematical logic; they excel at calculation but not at complex tasks with social or ethical consequences.

Broussard notes that we long for the imaginary AI of Hollywood, which is computationally impossible. She distinguishes between the General AI of sentient robots and conscious machines and the Narrow AI that is a mathematical tool for prediction. Using AlphaGo as an example, she shows how millions of hours of human labor created the training data and the algorithms that make the program capable of beating humans. The program does not think; it uses algorithms and training data to predict the best moves. She cautions readers to keep the distinction between General AI and Narrow AI, and the latter’s limitations, in mind as they read.

We have become a data-rich society. This abundance of data can be used to tell stories, show relationships, and make predictions. Broussard notes that technochauvinism blinds designers and programmers to the biases in their algorithms, a blindness linked to the belief that a computer makes a better and fairer decision than a human. Using examples from her own work as a data scientist, Broussard shows how injustice and inequality are embedded in today’s computational systems and urges her readers to challenge false claims of impartiality and fairness around technology.

In the second section, Broussard addresses the issues that arise when computers do not work or do not fully address the problems at hand. Using the public school system in Philadelphia as an example, she explores why a technological solution for improving standardized test scores will not work: it addresses the wrong question. Engineering solutions, which are mathematical solutions, work well within well-defined parameters, which schools do not offer. She pushes back on technochauvinistic solutions that overlook the limits of school budgets and the rampant poverty in some parts of the system.

Broussard contends that Marvin Minsky, the father of AI, and a small group of elite men had an outsized influence on the development of digital technology. Conditioned by the communalism of the 1960s and the technolibertarianism of Stewart Brand and Peter Thiel, in which ultra-free speech and radical individuality matter more than government or the social good, they imagined and misunderstood the connection between social issues and technology in ways that produced simplistic and dysfunctional thinking. Their disregard for women and for the conventional rules of society in favor of creating new technologies shows how deeply white male bias is embedded in technology. Broussard wants readers to appreciate innovation but not to take insane ideas seriously. She cautions against adopting a computational system designed by people who do not understand or care about the cultural systems in which the technology operates.

Machine Learning (ML) implies that the computer is sentient, but in reality ML means the computer can improve on the routines it has been programmed to perform, not that the machine acquires knowledge to act independently at tasks for which it was not programmed. ML depends on training data—large datasets used to “train” a specific machine-learning model. Broussard stresses that AI and ML algorithms are created by humans, who choose which contexts and biases to take into account or to ignore. She urges readers to remember that ML is mathematically based: unless social factors are included and coded in a way the computer can calculate, they are ignored. A data-driven approach will overlook many things that matter to humans. Machines are not learning, and “human judgment, reinforcement and interpretations is always necessary.”

Using “self-driving” cars as an example, Broussard argues that the complexity of driving is being approached in a naïve and simplistic way because of a misunderstanding of what AI can achieve. An algorithmic standard of “good enough” does not work for life-threatening activities like driving. Sensors and cameras cannot accommodate snow, bad weather, or the oddities that human drivers routinely encounter. Each state is setting its own standards for self-driving cars, each requiring additional programming. Broussard is most concerned with the ethical issues surrounding completely self-driving cars (no steering wheel and no brakes). When a car malfunctions or skids, how does it respond? Are the driver and passengers prioritized over people who may be standing on a street corner? Any response is built into the car by a programmer who made a decision. In Broussard’s view, the fully self-driving car is overreach because it does not serve people well. Technologists should focus on making “human assist” systems, not human-replacement systems.

Broussard argues passionately against programming that equates popular with good. This false equation builds in bias that quickly distorts and disenfranchises parts of the population. We must be critical of algorithms because they carry the biases of their developers, and we must work toward systems design that promotes equality. She believes the willful blindness of some creators makes a more inclusive technology necessary, and she hopes readers will come away understanding the need to investigate what our technical choices mean.

In the third section, Broussard shifts her focus to how technology and humans must work together. She uses the example of a hackathon to explore how technology develops and potentially disrupts society or industry; her experience on the Startup Bus hackathon showed how technology may or may not develop and the significant work that development takes. Broussard proposes a way forward—a collaboration between humans and machines. Machines can handle much of the mundane work, but not the unusual or out-of-range cases, which require human intervention. Such systems are called “human-in-the-loop” systems.

She notes several new organizations (the AI Now Institute and Data & Society) that are pressing for responsible and open computing. Within the AI community there is a nascent understanding that algorithms have been discriminatory, and a movement to address this. Broussard raises awareness that technologists and programmers have disciplinary priorities guiding their decisions, priorities that at times have obscured the humans technology is supposed to serve. She concludes that humans are the point of technology, and that all people, not just a subset, should benefit from the technology being developed.

This book is appropriate for the general public, computer science students, librarians, information professionals, and policymakers concerned with the increased presence of artificial intelligence in everyday life. Anyone intrigued by the ethical implications of artificial intelligence or machine learning will find this book informative and useful. It could also serve library and information science programs in courses on artificial intelligence.