Anyone following the rhetoric around artificial intelligence in recent years has heard one version or another of the claim that AI is inevitable. Common themes are that AI is already here, it is indispensable, and people who are bearish on it harm themselves.
In the business world, AI advocates tell companies and workers that they will fall behind if they fail to integrate generative AI into their operations. In the sciences, AI advocates promise that AI will aid in curing hitherto intractable diseases.
In higher education, AI promoters admonish teachers that students must learn how to use AI or risk becoming uncompetitive when the time comes to find a job.
And, in national security, AI's champions say that either the nation invests heavily in AI weaponry or it will be at a disadvantage vis-à-vis the Chinese and the Russians, who are already doing so.
The argument across these different domains is essentially the same: The time for AI skepticism has come and gone. The technology will shape the future, whether you like it or not. You have the choice to learn how to use it or be left out of that future. Anyone trying to stand in the technology's way is as hopeless as the manual weavers who resisted the mechanical looms in the early 19th century.
In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the ethical questions raised by the widespread adoption of AI, and I believe the inevitability argument is misleading.
History and Hindsight
In fact, this claim is the most recent version of a deterministic view of technological development. It's the belief that innovations are unstoppable once people start working on them. In other words, some genies don't go back in their bottles. The best you can do is harness them for your own good purposes.
This deterministic approach to tech has a long history. It's been applied to the influence of the printing press, as well as to the rise of automobiles and the infrastructure they require, among other developments.
(Credit: Bbeachy2001/Wikimedia Commons, CC BY)
The dominance of automobiles and the infrastructure that supports them over many decades only seems inevitable in hindsight.
But I believe that when it comes to AI, the technological determinism argument is both exaggerated and oversimplified.
AI in the Field(s)
Consider the contention that businesses can't afford to stay out of the AI game. In fact, the case has yet to be made that AI is delivering significant productivity gains to the firms that use it. A report in The Economist in July 2024 suggests that so far, the technology has had almost no economic impact.
AI's role in higher education is also still very much an open question. Though universities have, in the past two years, invested heavily in AI-related initiatives, evidence suggests they may have jumped the gun.
The technology can serve as an interesting pedagogical tool. For example, a Plato chatbot that lets students hold a text conversation with a simulated Plato is a cool gimmick.
But AI is already starting to displace some of the best tools teachers have for assessment and for developing critical thinking, such as writing assignments. The college essay is going the way of the dinosaurs as more teachers give up on the ability to tell whether their students are writing their papers themselves. What's the cost-benefit argument for giving up on writing, an important and useful traditional skill?
In the sciences and in medicine, the use of AI seems promising. Its role in understanding the structure of proteins, for example, will likely be significant for curing diseases. The technology is also transforming medical imaging and has been helpful in accelerating the drug discovery process.
But the excitement can become exaggerated. AI-based predictions about which cases of COVID-19 would become severe have roundly failed, and doctors rely excessively on the technology's diagnostic ability, often against their own better clinical judgment. And so, even in this area, where the potential is great, AI's ultimate impact is unclear.
In retrospect, using AI to help diagnose COVID-19 patients was problematic.
In national security, the argument for investing in AI development is compelling. Since the stakes can be high, the argument that if the Chinese and the Russians are developing AI-driven autonomous weapons, the United States can't afford to fall behind, has real purchase.
But a complete surrender to this form of reasoning, though tempting, is likely to lead the U.S. to overlook the disproportionate impact of these systems on nations that are too poor to participate in the AI arms race. The major powers could deploy the technology in conflicts in these nations. And, just as significantly, this argument de-emphasizes the possibility of collaborating with adversaries on limiting military AI systems, favoring an arms race over arms control.
One Step at a Time
A survey of AI's potential significance and risks across these different domains suggests that some skepticism about the technology is warranted. I believe that AI should be adopted piecemeal and with a nuanced approach rather than subject to sweeping claims of inevitability. In developing this careful take, there are two things to keep in mind:
First, companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being perceived as inevitable and necessary, since they make a living from its adoption. It's important to pay attention to who is making claims of inevitability and why.
Second, it's worth taking a lesson from recent history. Over the past 15 years, smartphones and the social media apps that run on them came to be seen as a fact of life – a technology as transformative as it is inevitable. Then, data started emerging about the mental health harms they cause teens, especially young girls. School districts across the United States started to ban phones to protect the attention spans and mental health of their students. Some people have reverted to using flip phones as a quality of life change to avoid smartphones.
After a long experiment with the mental health of kids, facilitated by claims of technological determinism, Americans changed course. What seemed fixed turned out to be alterable. There is still time to avoid repeating the same mistake with artificial intelligence, which could have even larger consequences for society.
Nir Eisikovits is a Professor of Philosophy and Director at the Applied Ethics Center at UMass Boston. This article is republished from The Conversation under a Creative Commons license. Read the original article.