An artificial intelligence (AI) chatbot has gone viral online, with the bot able to write entire songs, essays and poems on demand.
ChatGPT was released to the public by OpenAI just over a week ago.
Within days, the program garnered more than a million users, who pushed it to its limits.
Internet users quickly harnessed the AI’s capabilities to create plenty of memes, and commanded it to do their dirty work, such as writing their college essays.
The AI’s responses are also stunningly human-like, very much sounding like there is another person on the other end of the line.
But with ChatGPT appearing powerful and sophisticated, many wordsmiths have begun to speculate about whether the rise of AI could soon put them out of a job.
How it’s done
ChatGPT is a large language model trained on text data drawn from all corners of the internet, from recipes and song lyrics to research papers and Wikipedia.
The AI is even able to pick up on the nuances in social media posts and discussion forums – even learning from YouTube comments.
It then spits out a chunk of text based on the request the user types into the chatbot.
It’s similar in function to DALL-E and DALL-E 2, image-generation AI programs that went viral earlier this year and were also developed by OpenAI.
ChatGPT is essentially playing a complex game of ‘guess the next word’, said Aaron Snoswell, a doctor of computer science at Queensland University of Technology.
“It looks at lots and lots of text data that has been collected from the internet, generally. And they give it part of a sentence and ask it to fill in the blank and say, ‘what’s the next word?’” Dr Snoswell told The New Daily.
“And [it] does that billions of times and in the process learns about the structure of human language, and learns how to respond to sentences and texts in a way that seems natural.”
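The fill-in-the-blank game Dr Snoswell describes can be sketched in a few lines of Python. This is purely a toy illustration: it counts which word tends to follow which in a tiny made-up corpus, whereas ChatGPT learns the same kind of pattern with a neural network trained on billions of examples.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "lots and lots of text data".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def guess_next_word(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in next_word_counts:
        return None
    return next_word_counts[word].most_common(1)[0][0]

print(guess_next_word("sat"))  # "on" — the only word ever seen after "sat"
print(guess_next_word("the"))  # whichever follower of "the" is most common
```

Repeated at enormous scale, and with a model far richer than a frequency table, this is how a system can learn to produce text that "seems natural".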
Hottest meme machine
When it comes to making a request of ChatGPT, the sky is generally the limit.
You can ask simple questions, like ‘What colour is the sky?’, or ask it to solve a maths problem.
Creative internet users have tried just about anything, from ‘Write me a college essay about memes’ to ‘Write me a haiku about Donald Trump’.
You can also ask ChatGPT to provide answers in the style of texts or influential figures.
For example, ‘Write me a story about dolphins in the style of Shakespeare’ or ‘Write me a Bible verse asking a spouse to stop snoring’.
ChatGPT will usually go along with whatever you request.
It does, however, push back on requests that get too personal, with a response along the lines of:
“I am a computer-based program and do not have the ability to feel emotions or physical sensations. My purpose is to provide information and answer questions to the best of my ability based on the information and training I have received.”
It also has the ability to refer back to earlier parts of conversations, occasionally dropping an “as mentioned before”.
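One common way a chatbot can "refer back" like this is to replay the whole conversation as context with every new request. The sketch below is a hypothetical illustration of that idea, not ChatGPT's actual internals; the class name, role labels and prompt format are all invented for the example.

```python
class Conversation:
    """Toy chatbot memory: keep every turn and replay it as context."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def build_prompt(self, new_user_message):
        """Concatenate all prior turns plus the new message into one prompt."""
        lines = [f"{speaker}: {text}" for speaker, text in self.turns]
        lines.append(f"User: {new_user_message}")
        lines.append("Assistant:")  # the model continues from here
        return "\n".join(lines)

chat = Conversation()
chat.add("User", "My dog is named Rex.")
chat.add("Assistant", "Nice to meet Rex!")
prompt = chat.build_prompt("What is my dog's name?")
print(prompt)
```

Because the earlier mention of "Rex" is included in the prompt, the model can answer the follow-up question — and drop an "as mentioned before" — without any true long-term memory.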
You can experiment with the ChatGPT chatbot on OpenAI’s website.
More fun than threat
While plenty of people have been having fun with ChatGPT, some have begun to worry about the technology’s potential.
OpenAI co-founder Elon Musk tweeted that ChatGPT is “scary good” and that we are “not far from dangerously strong AI”.
But Dr Snoswell said these programs will likely not replace creative minds – rather serve as supplementary tools.
“I’m quite optimistic about these tools, as being tools,” he said.
“So for sure, they will change the way that … creating creative content works, and the way that programming works and the way that these jobs sort of function.
“But I don’t see them putting people out of jobs, necessarily, just changing the way that those sort of jobs look.”
Dr Snoswell, who is part of Queensland University of Technology’s Automated Decision Making and Society Centre, says he has observed people online successfully using ChatGPT as an academic tool.
ChatGPT has been used to summarise documents in simple language, and to translate text into different languages.
But he also acknowledged that the technology has its limitations.
While ChatGPT is able to provide convincing passages of text, its responses are often “not actually anchored in ground truth”.
“The downside is, it has the potential to also make things up. Sometimes it’s called hallucinating information. And so it will spit out an answer for you. It doesn’t really know how to say, ‘I don’t know’,” Dr Snoswell said.
“It will often come up with an answer that sounds very authoritative and correct, but may not actually be 100 per cent correct, or might have details that aren’t quite right.
“Because the way this technology works [is it] learns to mimic the structure in the field of human language, but it’s not actually anchored in ground truth.”
Dr Snoswell specialises in research on ethics and accountability issues around large language models.
He said there are other “big risks” associated with reliance on AI programs, including the dissemination of misinformation.
“This sort of a tool can make it very easy to generate content that sounds like it’s correct, but it’s not. And that could be used to … rapidly produce lots of content that could mislead people,” he said.
“The technology is very exciting. And it’s a very cool time to be working in artificial intelligence. But it’s also important to have safety systems in place.”