On February 26, two days after war broke out in Europe for the first time in decades, Father Paolo Benanti walked briskly through the centre of Rome dressed in a hooded robe. He set off from his home overlooking the ancient Forum of Augustus, dodging city buses, cyclists and street musicians, crossed the millennia-old Ponte Sant’Angelo, and finally made his way down the Via della Conciliazione, the arterial route to the Vatican. His destination was the Apostolic Palace — the Pope’s official residence — where he had a rather important meeting to attend.
Benanti is a Franciscan monk who lives in a spartan monastery he shares with four other friars perched above a tiny Roman church. The Franciscans take vows and live in communities, but they are not typical priests. They have day jobs teaching or doing charity or social work, emulating the life of Francis of Assisi, their founding saint. Benanti’s monastery is a house of learning: all the friars, the eldest of whom is 100, are either current or former professors, and their areas of expertise span chemistry, philosophy, technology and music.
Benanti, the youngest of them at 48, is an engineer and an ethicist, mantles he wears comfortably over his priest’s robes. As an ethics professor at the Pontifical Gregorian University, a nearly-500-year-old institution 10 minutes’ walk from the monastery, he instructs graduate theologians and priests in the moral and ethical issues surrounding cutting-edge technology such as bioaugmentation, neuroethics and artificial intelligence (AI).
Benanti was on his way to see Pope Francis, the Argentine-born pontiff whom he likens to a passionate tango, in contrast, he says, to his predecessor’s staid waltz. At the meeting, Benanti was to act as a translator of both languages and disciplines. He is fluent in English, Italian, technology, ethics and religion.
The Pope’s guest was Brad Smith, president of the US technology giant Microsoft, who had arrived the previous day in a private jet. On the agenda was the topic of AI and, specifically, how humanity as a whole could benefit from this powerful technology, rather than being at its mercy. The meeting was timely: the Pope was concerned about how AI might be used to wage war in Ukraine, and about what he could do to prevent the technology from ultimately destroying the fabric of humanity.
Over the past three years, Benanti has become the AI whisperer to the highest echelon of the Holy See. The monk, who completed part of his PhD in the ethics of human enhancement technologies at Georgetown University in the US, briefs the 85-year-old Pope and senior counsellors on the potential applications of AI, which he describes as a general-purpose technology “like steel or electrical power”, and how it will change the way in which we all live. He also plays the role of matchmaker between what Stephen Jay Gould famously described as the non-overlapping magisteria: leaders of faith on the one hand and of technology on the other.
He has held meetings with IBM’s vice-president John Kelly, Mustafa Suleyman, co-founder of Alphabet-owned AI company DeepMind, and Norberto Andrade, who heads AI ethics policy at Facebook, to facilitate an exchange of ideas on what is considered “ethical” in the design and deployment of the emerging technology.
Benanti, despite his jovial and optimistic affect, has also been instrumental in advising the Pope and his council on AI’s potential dangers. “AI has the power to produce another technological revolution . . . [and] it can usurp not only the power of workers, but also the decision-making power of human beings,” he says, as he shows me around the Church of Saints Quiricus and Julietta, his home. “It could be unjust and dangerous for social peace and social good.”
The Church’s leaders are particularly concerned with the idea that AI could increase inequality. “Algorithms make us quantifiable. The idea that if we transform human beings into data, they can be processed or discarded, that is something that really touches the sensibility of the Pope,” Benanti tells me. “If you look at what happened to children and the elderly in the first industrial revolution, they were either overused or excluded by the changes in society. The way AI could reshape the way in which wealth and power is distributed could be really unmerciful for the most fragile ones.”
Smith’s February audience with the Pope wasn’t the Microsoft executive’s first. In February 2019, Benanti brokered their first meeting as part of a cross-disciplinary council to debate the ethics of artificial intelligence. After bonding over their support for undocumented migrants and refugees, the two delegations agreed to work together on something more ambitious and tangible: a pledge of common human values that would act as a guide for the designers of artificial intelligence.
Benanti helped draft an ethical commitment, titled the Rome Call, that was signed by Microsoft, IBM, the Italian government and others in February 2020. At its heart was an imperative to protect human dignity above any technological benefits. “The first meeting produced the trust that led to the Rome Call: everything starts from human-to-human relationships,” Benanti explains.
This is not the first time the Vatican has interceded on matters of technological development. Nuclear weapons have been a core part of its foreign policy agenda since the cold war, and more recently it issued strong rhetoric against biotechnology such as human cloning. But the Church’s view on AI has been more considered and inclusive, drawing on expertise from a range of institutions including other religions in an attempt to collectively check the power of private companies.
This particular intervention also sparked a longer discussion within the Holy See about how to create a more diverse, global alliance to hold AI companies to account. The two-year debate culminated in plans for a historic event that is to take place in Abu Dhabi this May: the signing, by leaders of the three Abrahamic religions, Christianity, Islam and Judaism, of a multireligious ethical charter to protect human society from AI harms.
“They see the same issues here, and we want to find a new way together,” Benanti says, as we drink espressos in the dining room of the monastery. “To my knowledge, these three monotheistic faiths have never come together and signed a joint declaration on anything before.”
The pursuit of building mechanical objects to approximate human intelligence is nothing new. Jewish folklore tells of the golem, an inanimate humanoid in 16th-century Prague, imbued with life by Rabbi Loew to protect local Jews from anti-Semitic attacks. The story’s consequences are predictable: the golem runs amok and is ultimately undone by its creator. Ditto with Mary Shelley’s Frankenstein, the tale that helped birth the science-fiction genre, which has grown ever more preoccupied with depictions of rogue AI.
Today, real-world AI is less autonomous and more assistive. Since about 2009, a boom in technical advancements has been fuelled by the voluminous data generated from our intensive use of connected devices and the internet, as well as the growing power of silicon chips. In particular, this has led to the rise of a subtype of AI known as machine learning, a method of teaching computer software to spot correlations in enormous data sets. These algorithmic systems are standing in for human judgment in medicine, recruitment, loan approvals, education and prison sentencing.
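Machine learning’s correlation-spotting can be sketched in a few lines. The following toy example, with entirely illustrative data and names not drawn from any system in this story, fits a straight line to a handful of data points by ordinary least squares, the simplest version of letting software discover a pattern from data rather than being programmed with one:

```python
# Toy sketch of machine learning as correlation-spotting: fit a line
# y ≈ slope·x + intercept to data by ordinary least squares, stdlib only.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, drive the fitted slope.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative "training data": five observations of some input and outcome.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

slope, intercept = fit_line(xs, ys)
print(f"learned pattern: y = {slope:.2f}*x + {intercept:.2f}")
```

Real systems fit millions of parameters to millions of examples, but the principle is the same: the software infers a relationship from the data it is shown, which is exactly why flawed data produces flawed decisions.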
It has also become clear that algorithmic decisions can be fraught with errors. Inaccurate inputs can cause computers to propagate existing biases. For instance, researchers found that an AI system used in US hospitals, helping treat up to 70mn Americans by allocating extra medical support to patients with chronic illnesses, was prioritising healthier white patients over sicker black patients.
The researchers calculated that nearly 47 per cent of black patients should have been referred for extra care, but the algorithmic bias meant that only 17 per cent were. Data, Benanti explains, is a map — not a copy — of reality. Therefore, “it is unthinkable . . . that an AI machine can make error-free choices. [Machines] shall always and integrally be fallible.”
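The map-versus-territory problem Benanti describes can be made concrete with an entirely synthetic sketch. Here a hypothetical system ranks patients for extra care by recorded past spending, a proxy for medical need; if one group systematically spends less at the same level of illness, the proxy quietly under-refers them. The groups, severities and numbers below are invented for illustration only:

```python
# Synthetic illustration of proxy-driven bias: referring patients by past
# spending rather than by actual need. All data here is invented.

def refer_by_spending(patients, n_slots):
    """Refer the n_slots patients with the highest recorded spending."""
    ranked = sorted(patients, key=lambda p: p["spending"], reverse=True)
    return ranked[:n_slots]

# Two groups with identical illness severity, but group B's recorded
# spending runs lower for the same severity (the flawed "map" of reality).
patients = []
for i in range(10):
    severity = i + 1
    patients.append({"group": "A", "severity": severity, "spending": severity * 100})
    patients.append({"group": "B", "severity": severity, "spending": severity * 55})

referred = refer_by_spending(patients, 6)
counts = {"A": 0, "B": 0}
for p in referred:
    counts[p["group"]] += 1
print(counts)  # → {'A': 5, 'B': 1}: equal need, unequal referrals
```

The model never sees a group label; the disparity arrives entirely through the proxy variable, which is what made the real hospital algorithm’s bias so hard to spot.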
Benanti’s advice to the Holy See has been straightforward: the Church can help create a new system of “algor-ethics”, a basic framework of human values to be agreed upon by multiple countries, religions, non-profits and companies around the world, and also understood and implemented by machines themselves. At its core, he says, algor-ethics would require all autonomous systems to doubt themselves, to experience ethical uncertainty. “Every time the machine does not know whether it is safely protecting human values, then it should require man to step in,” he says. Only then can technologists produce an AI that puts human welfare at its centre.
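The deferral rule at the heart of algor-ethics, as Benanti describes it, can be sketched in a few lines. The threshold, score scale and function names below are assumptions for illustration, not any specification from the Vatican or the Rome Call: the machine expresses its own uncertainty, and hands the decision to a human whenever it cannot be confident.

```python
# A minimal sketch of the "algor-ethics" deferral idea: an automated
# decision that doubts itself and defers to a human when unsure.
# The threshold and scoring scheme are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # below this, the machine must "doubt itself"

def automated_decision(score):
    """Return the machine's decision for a score in [0, 1], or defer to a human."""
    # A score near 0.5 means the system is maximally unsure either way.
    confidence = abs(score - 0.5) * 2  # 0 = no confidence, 1 = certain
    if confidence < CONFIDENCE_THRESHOLD:
        return "defer_to_human"
    return "approve" if score > 0.5 else "reject"

print(automated_decision(0.98))  # approve: clear-cut, confidence 0.96
print(automated_decision(0.60))  # defer_to_human: confidence only 0.2
print(automated_decision(0.01))  # reject: clear-cut, confidence 0.98
```

The essential design choice is that the ambiguous middle band is not split by the machine at all; uncertainty itself becomes a signal that a person must step in.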
Benanti’s role — trying to agree on a joint set of values between the three religions — is delicate. Each faith has its own moral red lines and interpretations of harm, so consensus will require a balancing act. Perhaps more importantly, AI used in most of the world is designed by California-based engineers who bake their own perspectives into the code.
“I would call Silicon Valley’s ethos almost libertarian and very strongly atheist, but usually replacing that atheism with a religion of their own, usually transhumanism or posthumanism,” says Kanta Dihal, a researcher of science and society at the University of Cambridge. “[It’s] a ‘man becoming god’ kind of narrative, which is strongly influenced by a privileged white male perspective shaping what the future might look like.”
Dihal’s analysis is reminiscent of a fragment from Fides et Ratio, Pope John Paul II’s celebrated 1998 encyclical on the dialogue between faith and reason, which Benanti teaches in his graduate class. I discovered it when one of his students mentioned it to me after we attended a morning lecture on neuroethics.
In the encyclical, John Paul II writes, “Certain scientists, lacking any ethical point of reference, are in danger of putting at the centre of their concerns something other than the human person and the entirety of the person’s life.” He adds: “Further still, some of these, sensing the opportunities of technological progress, seem to succumb not only to a market-based logic, but also to the temptation of a quasi-divine power over nature and even over the human being.”
During his doctoral studies, Benanti was drawn to the field of AI ethics. Like AI itself, this new area was born out of the technology industry and strongly reflects its sensibilities and ideals. He felt there was an urgent need for a more “humanistic” approach, a perspective brought by thinkers and experts on history, society and morality. As AI systems began to be deployed in healthcare, law enforcement and government, it has led to a proliferation of published ethical principles from “nearly every country, every NGO, every technology company”, Dihal says.
A 2019 review of these principles by researchers at the Swiss Federal Institute of Technology in Zurich uncovered at least 84 distinct documents containing ethical guidelines for AI, nearly 90 per cent of which were released after 2016. The majority were authored by the technology industry and western governments, which the authors concluded “raises concerns about neglecting local knowledge, cultural pluralism and the demands of global fairness”.
The review’s authors found that the documents diverged conceptually, revealing disagreements over which ethical principles should be prioritised and how conflicts between them should be resolved. As it stands, they wrote, these efforts “may undermine attempts to develop a global agenda for ethical AI”. This effort, which Dihal calls the “third wave of AI ethics”, is Benanti’s focus. The task is to articulate values, common to all human beings, that we can teach our machines to honour.
A balding Roman with dark-rimmed glasses and a penchant for jokes, Benanti was born in Trastevere, a neighbourhood on the west bank of the River Tiber, just south of the Vatican. “I’m the son of pop culture. I grew up with Star Wars and Blade Runner. Sci-fi is the entertainment of my generation,” he says, with a grin. Like many children of his generation, he was also fascinated by computers. He learnt to code in languages such as Basic, Pascal and C, and tinkered with the innards of his grandfather’s Olivetti M24, whose green DOS-prompt glow he remembers fondly.
Engineering was a natural choice for Benanti. It was his father’s profession, and he had always been curious about how the world worked. So he completed his degree in mechanical engineering, and spent a few years pondering what to do with it. When his fiancée introduced him to the Franciscans, encouraging him to spend some time in their local church, he started going along with her. The experience was so magnetic that he eventually decided to leave her and join the fraternity.
“When I first went to live as a Franciscan at 25 years old, I felt deeply free and . . . a deep happiness because I realised I’m looking for the meaning of things, not for how things work,” he says. Studying theology and joining the order permanently gave him an understanding of the world — and a sense of meaning — that engineering only skimmed. “It’s like when you leave for a long journey. You are happy, you are curious, you have a sense of freedom as you start to walk.”
When he was small, Benanti was a Boy Scout who played football in Trastevere, coached by his parish priest, Vincenzo Paglia. Today, Paglia is an archbishop and the president of the Pontifical Academy for Life, an academic society at the Vatican that began as a pro-life institute in 1994. Since then, it has evolved into studying broader issues surrounding the extension and preservation of human life, including medical ethics and artificial intelligence. In short, Paglia is the Catholic Church’s official emissary for AI, and Benanti is his adviser.
The day after the Pope’s Microsoft meeting, which Paglia also attended, Benanti took me to see the archbishop. To walk the city with the energetic Benanti requires comfortable shoes and a lively pace. He often wears jeans and trainers underneath his robe and goes everywhere on foot with a backpack slung over his shoulder, stopping often to talk to friends, students and strangers. At one point, he stopped to take confession from a taxi driver with whom he had just struck up a conversation.
Weaving through the crowds, Benanti explains that he is interested not just in technology’s impact on society, but also in anthropology. He was reading Harvard psychologist Joseph Henrich’s book The Weirdest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous, about the cultural quirks of “western educated, industrialised, rich and democratic” (or Weird) populations, who, Henrich argues, are an indirect product of the Catholic Church’s edicts.
“If AI technology is made by people of this one culture, it will see all of us in that way. And we will all start to behave in that way to adapt to it. Technology is the lens through which we all see the world,” Benanti tells me, as we enter the marbled, mosaic building of the Pontifical Academy for Life in the Vatican.
We’re greeted warmly by 76-year-old Archbishop Paglia in his spacious office, which has a huge painting of the crucifixion on the wall behind an antique desk. Paglia agrees with the idea that AI is becoming a monoculture. “We see great epoch-making events from different directions: nuclear power and bombs, ecological crisis and also new technologies. I strongly believe that the Church has to enter in these fields now. Not only for itself but . . . to help to make a new covenant among all human beings,” he says. “A new alliance among thinkers, among cultures, among institutions too, in order to create a new humanistic future.”
Paglia, who has fostered relationships with his counterparts in the Jewish faith, believes that technology companies can only be held accountable by multifaith collaborations. “In a global world, we religious leaders have the duty to help the dialogue . . . between different people and cultures,” he says. “All the religious traditions . . . offer a perspective on humans and human behaviour. And so we have to work with other religions to have global solidarity, not global conflict.”
When Benanti first began advising Paglia on the impacts of AI, the archbishop says he “felt the urgency to go deeper into . . . the frontiers at which life functions”. In 2019 and 2020, he commissioned two expert workshops, first on robo-ethics — the relationship between humans, machines and health — and later on AI, ethics and law, which he describes as an “in-depth anthropological and ethical reflection”.
Paglia argues that without these reflections, the world risks undergoing an “extremely serious” process of dehumanisation. “The technology is not just pinging or touching human beings, but it gives a new understanding to what it means to be human today.”
Those workshops, alongside the Rome Call and the upcoming multifaith Abu Dhabi meeting, prompted the Pope last April to establish the Renaissance Foundation, a non-profit that will further reflect upon and study new technology’s impact on human life. The first two studies it has funded examine the use of AI in migration and border control, and its use as a weapon of war.
“We know cyber wars anticipated land wars. And possibly refugee status decided by an algorithm — that is a non-human dictatorship,” Paglia says. “We would like to defend the freedom of every woman and every man, especially the weakest. And this is the reason we are afraid of the possible oligarchy of big data. It could be a new form of slavery.”
On our twilight walk back through the city, Benanti openly ponders whether he feels optimistic or pessimistic about the future of AI. “When the first human being picked up a club, was it a tool or a weapon? With AI, I see a lot of wonderful tools, but I see weapons too. Sometimes the difference, the line, is so narrow.”
The day before I’m due to leave Rome, Benanti invites me to eat lunch with the friars, a mark of Franciscan hospitality. The spread is varied, plentiful and vegetarian, allowing me to choose what I want without restriction. As we tuck into stuffed Portobello mushrooms crisped with breadcrumbs, a spring vegetable and squash frittata, and a Roman artichoke ravioli, Benanti tells me that he, along with the Vatican leadership, is already planning the next stage in the global reckoning on AI ethics: a meeting with leaders of the Eastern religions, including Hinduism, Buddhism and Shinto, in Hiroshima, a symbol of technological destruction brought by human hubris.
While his aims might seem lofty, Benanti’s personal goals are simple. “As a Franciscan, my achievement is to see that every human being is simply free to live the life that God gave to him,” he says. “I would like to see no constraints.” Benanti says he is driven purely by his curiosity to better understand the human condition.
As we round off our meal with homemade pomegranate crostata, he tells me he has been playing with GPT-3, an AI model built by the San Francisco-based lab OpenAI, which can produce human-like text. The system, while imperfect, is impressive. It can write news stories, translate languages, even compose poetry.
“I want to analyse its power to be creative,” Benanti says. “I am interested in how this AI can change the shape of trust, truth and knowledge.” So he fed it philosophical texts and Dante’s verses in the poet’s medieval Italian vernacular. And, in return, the AI wrote Benanti a poem about the stars, the sky and love.
Madhumita Murgia is the FT’s European technology correspondent