In the past few months, a suite of new artificial intelligence products has emerged that goes far beyond anything previously made available to the public.
Last week, the high-profile suspension of a Google employee, who went public with his belief (almost certainly incorrect) that an AI chatbot was sentient, put a spotlight on just how far AI has come.
One major advancement has been the AI model Generative Pre-trained Transformer-3 (GPT-3), released by the research firm OpenAI in 2020. Since its initial release, OpenAI has slowly rolled out access to the model for various uses, wary that such a powerful technology could be misused.
Just how powerful is this technology? Rather than telling you, why don’t we get the AI to tell you? First, I asked OpenAI’s GPT-3 to explain AI to me.
“Artificial intelligence works by using algorithms to process data and make predictions or recommendations,” it told me.
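Under the hood, each of these exchanges is just a block of text sent to OpenAI's completions API, which writes the text it thinks should come next. Here is a rough sketch of what my question could look like as a request through the OpenAI Python client of the time; the engine name, API key and sampling settings are placeholders for illustration, not the exact ones used for this article.

```python
import openai

# Placeholder API key from the OpenAI dashboard.
openai.api_key = "sk-..."

# Ask GPT-3 a plain question. "text-davinci-002" is one of the engines
# available around this time, chosen here purely for illustration.
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Explain how artificial intelligence works.",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```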
OpenAI has also published research explaining the model. It's quite complicated, but thankfully AI can assist here as well. I asked GPT-3 to explain the paper's abstract as if I were a second grader.
It took: “Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions — something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.” (Language Models are Few-Shot Learners)
And spat out this:
NLP is a field of computer science and linguistics that deals with the interaction between computers and human languages. NLP systems are used to process and analyze large amounts of natural language data. Recently, a new method for training NLP systems has been developed that involves pre-training the system on a large corpus
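That summary came from framing the request as an instruction followed by the abstract itself. A hypothetical sketch of that prompt, again using the Python client of the time, with illustrative settings rather than the ones actually used:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# The abstract is truncated here for brevity; the full text from the paper
# "Language Models are Few-Shot Learners" would be pasted in.
abstract = "Recent work has demonstrated substantial gains on many NLP tasks..."

# The instruction and the target audience are simply written into the prompt.
prompt = (
    "Summarise the following abstract for a second grader:\n\n"
    + abstract
    + "\n\nSummary:"
)

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=120,
    temperature=0.5,
)

print(response.choices[0].text.strip())
```

The abrupt ending of the summary above is what you tend to get when the response runs into the requested length limit (max_tokens) before the model has finished its thought.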
Impressed by its power, I turned my mind to how it could be used commercially. Of course, GPT-3 can help here too.
I gave it a product description — “An artificial intelligence program that will write stories for you” — and a few prompt words to create some product names for me.
It suggested NewsMaker, StoryTeller, AI-Writer. No points for imagination, but it works.
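The product-name trick is the same idea again: the description and a handful of seed words are written into the prompt, and the model continues the list. A hedged sketch; the seed words below are invented for illustration and are not the ones from the experiment described above.

```python
import openai

openai.api_key = "sk-..."  # placeholder key

description = "An artificial intelligence program that will write stories for you"
seed_words = "story, write, intelligence"  # invented for illustration

# The model is nudged to continue the list after "Product names:".
prompt = (
    "Product description: " + description + "\n"
    "Seed words: " + seed_words + "\n"
    "Product names:"
)

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=32,
    temperature=0.8,
)

print(response.choices[0].text.strip())
```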
It gets more creative too. GPT-3 can write short horror stories from a prompt, which I asked it to do using the word “journalism”. It gave me back:
“I was so close to getting the story. But then, I turned around and saw the red eyes watching me from the shadows.”
Yikes! Finally, GPT-3’s abilities even extend to being sarcastic (if you specify).
ME: How do you feel about me using you to keep my editor happy?
GPT-3: I love being used.
ME: It’s great that you’ve been able to help me out when I needed to write something this morning.
GPT-3: It’s my pleasure. I love being helpful.
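Even the sarcasm is just prompting: you describe the persona you want, write out the conversation so far, and let the model complete the next line. A rough sketch of how that dialogue could be set up, again with placeholder settings:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# The persona instruction and the conversation so far go in as plain text;
# the model completes the next "GPT-3:" line.
prompt = (
    "The following is a conversation with an AI assistant. "
    "The assistant answers sarcastically.\n\n"
    "ME: How do you feel about me using you to keep my editor happy?\n"
    "GPT-3:"
)

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,
    stop=["ME:"],  # stop before the model starts writing the human's next line
)

print(response.choices[0].text.strip())
```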
Who could ever forget Microsoft's conversational AI bot on Twitter, “Tay”? It was an interesting experiment that lasted all of 24 hours. People quickly caught on that Tay could be manipulated into tweeting all manner of things. Thanks to the children over at 4chan conversing with it, Tay eventually became a misogynistic, racist, Trump-loving a-hole and had to be shut down.
If you ignore the differences between artificial and natural intelligence, then the two look remarkably similar. And given the level of natural intelligence that AI is sometimes competing with, being smarter is a pretty low bar.
Can you download apps and fix my computer?