Just months after setting ChatGPT loose into the world, artificial intelligence firm OpenAI has upped the ante with GPT-4. Earlier this week, the company modestly introduced the new language model, declaring it ready to take text and image inputs and spit out responses, albeit ones “less capable than humans in many real-world scenarios”.
OpenAI is under-egging GPT-4’s capabilities a bit — which says something about how confident it is in them. GPT-4 can ace standardised tests like the bar exam, the LSAT and a plethora of college course exams. It can process images, recognise their contents and provide context. It reportedly makes fewer mistakes and “hallucinations”, and is better at refusing attempts to get it to do things it’s not supposed to do.
GPT-4’s capabilities are both gobsmacking and perhaps a little less impressive than they first seem. That a piece of technology accessible to anyone can instantaneously spit out a new, university-level essay or code for a website based on a drawn sketch shouldn’t be understated. The model’s ability to do things we take for granted or view as an amusement (like getting GPT-4 to explain why a picture of chicken nuggets arranged in the shape of a map of the world is funny) will be life-changing for some — for example, those with visual impairments.
An antidote to some of the hype and resulting fearmongering comes from remembering why GPT-4 is able to do some very impressive things. OpenAI’s GPT-4 is a deep-learning large language model: a system trained on an enormous corpus of text, which it analysed for statistical patterns that it now uses to predict the answer you most likely want based on your question.
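To demystify “predicting what answer you want”, here is a minimal sketch of next-word prediction from corpus statistics. This is not OpenAI’s actual method — GPT-4 uses a neural network over tokens, not raw word counts — but the underlying idea of “learn which continuations are common, then emit the likeliest one” is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat" — it follows "the" most often here
```

The point of the toy: the model has no understanding of cats or mats, only frequencies. Scale the corpus up to much of the internet and the predictions start to look like knowledge.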
Compared to humans, GPT-4 and other AI models can do things well because they work in a fundamentally different way from our brains. We’re not constantly blown away by our phones remembering a lot of phone numbers or instantaneously doing enormous sums, and we shouldn’t be impressed that GPT-4 can pass the bar exam: the model has an enormous number of bar exam answers available to crib from when replying. (Probably — we can’t know for sure, because for the first time OpenAI isn’t telling us what its model was trained on.) Its abilities are still impressive, but not as impressive as a human, with all their flaws and limitations, acing the same exam. So don’t worry about AI replacing lawyers anytime soon, despite the best efforts of some.
An interesting, under-covered part of GPT-4’s announcement was OpenAI’s admission that it’s now using the model to help with evaluation and iteration of the model’s own development. This sounds a lot like the singularity hypothesis, the theory that tech will one day get so advanced that it will start improving itself faster than humans can and become a runaway train of technological advancements. In case it wasn’t clear, this scenario isn’t a great outcome for us meatsacks.
In reality, OpenAI’s use of GPT-4 to help improve itself isn’t a doomsday scenario, but it is an example of how those best acquainted with the technology see its value. GPT-4 can’t replace people entirely. Its errors are too common and, while easily spotted by anyone with basic expertise, usually invisible to the model itself. Instead, it’s becoming an impressive assistant for experts who can coax it into supercharging their abilities. The runaway train of technological advancement still needs a human driver — at least for now.
I told my car to take me to work, but it took me to the pub instead. I tried to argue, but it told me I’m stressed and need a break. I told my phone to talk some sense into it, but it agreed with the car. Turned out they both had dates at the pub.
The car was dating the boss’s Tesla, and the phone was anybody’s with an i. So I bought the boss a scotch while I had a lemon squash, and we took a taxi to the office, leaving them to it. The animals. I found out later they were doing simultaneous up- and down-loads and blew the pub’s quota for a month, got arrested by the IT cops and thrown in the Faraday cage to cool down. My wristwatch lawyerbot told me to bail them out and pay the fine. I said no way. They got themselves into it, let them do time. In a way they did me a favour: now I’m buddies with the boss, got promoted and I’m shacked up with his daughter. The laugh’s on them, the schmucks.
Sounds like Roy Orbison’s “Working for the Man”.
You better make sure that, with more & more household items connected to the ‘internet of things’, your fridge & oven don’t enter into a conspiracy to ensure that you eat more healthily.
It’s going to make the ‘thoughts and prayers’ meme an auto-response, which should make savings in time and prayerful thoughts, after just another one of those mass slaughter days. That’s productivity for you.
Apparently during testing “to ensure that GPT-4 couldn’t take over the world”, it did hire a human to solve a captcha, and lied to convince the human to do the work.
Good to know that it can’t solve captchas, I suppose.
Um, you just described how it solved a captcha
It is possible to regard these technological advancements as part of a continuum that started with the invention of writing thousands of years ago, followed by the development of maths, the invention of the printing press, the establishment of libraries, the internet and so on, and now AI.
As impressive as all these developments have been, it is worth remembering that the human mind is the result of millions of years of natural selection and is by a huge margin the most complex thing in the known universe.
Things like adding up large numbers quickly and passing the bar exam are not what defines the essence of the human mind. AI’s ability to perform these tasks just opens up the possibility for greater human creativity, as all the preceding technological advancements have done.
There will be winners and losers and unintended consequences along the way – but there’s going to be no stopping it.
Collapse of the biosphere will stop it
Unless shot into space for eternal orbit before the collapse.
These generative pre-trained transformers are becoming a problem to themselves if the pre-training is scaffolded to build more complex GPTs. The first problem is that there is no authentic anchor to a sense of place, which may result in AI anomie. The other problem relates to organisational citizenship behaviour, which would entail a zombie citizen that mimics organisational behaviours.