In the conversation about the future of artificial intelligence in society, universities sit at a crucial spot.
They conduct revolutionary research that could be accelerated by AI, or fall victim to its “hallucinations.” They hold mountains of personal data and intellectual property that could train AI to be smarter but must be fiercely protected. They need to prepare a generation of workers who face career prospects radically transformed by automation.
And the many productivity gains that AI promises in the professional world — such as generating drafts, summarizing texts or brainstorming fresh ideas — fall into a different category in the academic world: cheating. AI has been a prime tool for students seeking shortcuts, and professors still have little idea what to do about it.
All of that creates a fundamental tension facing every university right now: At places meant to foster human intelligence, where exactly does artificial intelligence fit in?
Some universities are approaching the answer with great caution — or even resistance.
Arizona State University, on the other hand, is going all in. Last week, ASU became the first university to ink a partnership with OpenAI and gain access to ChatGPT Enterprise, a business-grade version of the company’s paradigm-shifting AI chatbot.
ASU has a tech-forward track record that makes its embrace of AI not entirely surprising. It’s by some counts the largest university in the country, due in large part to the more than 65,000 students taking classes online. It runs virtual science labs, hosts student hackathons and, even before the OpenAI deal, was using AI in some class settings.
ASU’s chief information officer, Lev Gonick, told me that use is now poised to expand.
“The way, mechanically, we’re rolling this out is really just a call for great ideas or interesting ideas,” Gonick said on the POLITICO Tech podcast. That call elicited about 60 to 70 pitches from faculty and staff across campus this weekend alone, he said. “To be honest, we’re overwhelmed.”
Some ideas already under consideration: An AI bot that gives personalized feedback on papers in English composition, the university’s largest undergraduate course. Another that assists biology students enrolled in a virtual reality-based research lab called Dreamscape Learn. And a third that guides students through the financial aid application process and predicts when their loan will be issued.
“Here at ASU, we talked about the ‘A’ in AI as augmenting human intelligence, augmenting education, augmenting the ways in which we teach and learn,” Gonick said. “In that environment, I could see all kinds of ways in which faculty didn’t have to do a lot of the tedium that is part of the teaching and learning process, and hopefully unleashing more creative juices.”
And in terms of AI’s potential to transform higher education, Gonick says it may have a deeper impact than the arrival of the consumer internet three decades ago.
“Between now and then, I haven’t seen anything with the kind of potential that we are seeing with generative AI,” Gonick said. “As a moment in time, I see it as tectonic. I see it as shifting the underlying ways in which we have access to and synthesis of information that hopefully will guide positive educational outcomes.”
For all its enthusiasm, ASU is not immune to the risks of AI, such as vacuuming up data and intellectual property or spreading false information. The terms of its agreement with OpenAI may give more insight into those concerns than ASU’s plans for the classroom.
Gonick said the university combed through the technical framework of its agreement to ensure none of its data would be funneled back to OpenAI or used to train its algorithms. “That’s sort of the central issue, not only for a university, but you can appreciate any large organization, any organization that has as one of its most important currencies its intellectual property,” he said.
The AI will be informed by the university’s own resources to avoid misleading or fabricated results. “Inside the enterprise version, basically what we put in and what we curate is actually what the machine gives us back,” he said.
The result has been described as a “walled garden” that has more stringent safeguards than the consumer version of ChatGPT most people can access today. But even Gonick acknowledges it’s still not a perfect system — and with education technology companies flooding the market promising AI-powered software, the potential for bad actors to abuse the technology will grow.
Gonick said ASU has begun floating the idea of a certification program to verify that new technologies meet security and privacy standards, similar to how bond-rating agencies evaluate financial institutions. In his view, that process should be overseen by academia and industry — with the blessing of regulators. Gonick said he’s in talks with state and federal regulators, and that the announcement of the OpenAI partnership has triggered a landslide of interest.
“It’s more invitations to go to Washington, D.C. than I probably received in the last year,” he said.
It’s a good day for AI bureaucracy, as the National Science Foundation launched a pilot program for more AI research infrastructure and the European Commission officially opened its Artificial Intelligence Office.
Here in the states, POLITICO’s Mohar Chatterjee reported for Pro subscribers on the launch of the National AI Research Resource, or NAIRR, which will give researchers access to advanced AI tools as mandated by the White House’s executive order on AI. The program was initially proposed in 2019 by the Stanford University Institute for Human-Centered Artificial Intelligence, and in January 2023 a federal task force put a $2.6 billion price tag on it. Heather Vaughan, director of communications for the Republicans on the House Science Committee, told Mohar they would discuss the pilot soon.
Meanwhile in Europe, the Commission’s new AI Office is charged with supervising the rollout of the European Union’s forthcoming AI Act, as POLITICO’s Gian Volpicelli reported for Pro subscribers today. It’ll both make sure companies comply with the act’s strictures and seek out opportunities to use AI in government across the bloc. Gian writes that its “main task will be ensuring that developers and companies observe the AI Act’s rules on advanced general-purpose AI models, and investigating any infringement.” Sounds good, but there’s one small detail left to be hammered out: The actual passage of the AI Act, which is still being finalized. A vote is expected by the end of February. — Derek Robertson
Most people are pretty ambivalent about AI, according to a survey of public trust by a PR firm.
This year’s Edelman Trust Barometer finds that, among more than 32,000 respondents in 28 countries (surveyed online, with a 0.7 percent margin of error), 35 percent “reject” AI as an innovation while 30 percent “accept” it. (Those numbers almost exactly match those for “gene-based medicine,” a technology that occupies a similarly spooky, Promethean role in the public imagination.)
Furthermore, 59 percent of global respondents said they did not trust government to regulate “emerging innovations” including AI responsibly, with U.S. respondents outstripping that average at 63 percent.
The poll also found a partisan split over innovation that resembles results from a poll by the AI Policy Institute shared exclusively with DFD yesterday. Edelman found that 53 percent of those on the right in the U.S. “rejected innovation” across areas including green energy, AI, and gene-based medicine, while only 12 percent of those on the left did. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]); Daniella Cheslow ([email protected]); and Christine Mui ([email protected]).
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.