Skepsis #50: What's going on with AI?
The latest generation of Generative AI tools is really cool. What are its practical applications and limitations, and is it really creative?
I've been reading about AI extensively in recent weeks and months in order to wrap my head around what's going on and what the implications are for people, businesses, and the world at large. With anything that becomes this hyped and gets so much attention, I can't help but try to understand what's really going on. What is all the fuss about AI beyond the hype?
What is Generative AI?
Unless you’ve been living under a rock for the past few months, you will have noticed the explosion of interest and hype around so-called Generative AI, and tools like ChatGPT in particular. ChatGPT, based on the AI-powered large language model (LLM) GPT-3 by OpenAI, can most simply be described as “autocomplete for everything” - a tool that lets you generate text or ask questions in natural language and get extremely useful answers (for the most part!). People are using it to write emails, marketing copy, and blog posts, to cheat on exams, and even to write code.
I can get help with writing Excel formulas:
If I’m stuck or need help, it can easily come up with the next part of a sentence or paragraph given enough context - or even write an entire article if needed:
In terms of its usefulness compared to prior versions of similar tools, it really is something else. The first time you take it for a spin, you’ll be genuinely impressed by the level of accuracy in its writing, how “human” it sounds, and how easy it is to talk to (or write with). The latter point is really what makes it so useful from a human point of view; it can “understand” what you’re trying to do or what you want to know and make useful suggestions 99+% of the time. I suggest you try it if you haven’t already.
In addition to text-based Generative AIs, there are also tools that specialize in creating images and even videos, like OpenAI’s DALL-E. These have been built using different models but fall within the same category of (Generative) AI.
If you want to read more about the basics of tools like ChatGPT and other Generative AI tools, I’d recommend reading this article from Thomas Pueyo:
AI is overhyped but also useful
ChatGPT and similar LLMs are models that produce text-based predictions from an existing corpus of text - predictions that sound like something a human would have written. As a result, the quality of ChatGPT's output basically converges on the average of everything that exists on the Internet. It's the "combined wisdom" of everything else that has been written. This makes it seem very accurate and smart - it is very good at synthesizing a lot of information very quickly - but it isn't really smarter than the texts it has been trained on.
Today, Generative AI based on LLMs isn't capable of logical and factual reasoning because it isn't built to work as a linear machine like a calculator. As we’ve seen above, these are probabilistic machines that are great at creating accurate-sounding and coherent text output from a given input. This can give the appearance of a logical machine answering as if it were an oracle - even a sentient one, because it feels and sounds like a human in its replies. But it's just a machine that's very good at guessing what words should come next, written in a way that makes sense to humans.
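The "guessing what words should come next" idea can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a small made-up corpus and then greedily autocompletes. Real LLMs use neural networks trained on vast corpora rather than simple word-pair counts, but the core loop - predict the next token, append it, repeat - is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus (entirely made up for illustration).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=4):
    """Greedily extend a prompt by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # autocompletes a plausible-sounding phrase
```

The model has no idea what a cat or a mat is; it only knows which words tend to follow which. Scale that idea up enormously and you get output that sounds human without any underlying reasoning.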
Generative AIs can make use of their accumulated knowledge to help humans develop ideas, produce content, design models, and more. This allows them to produce output at scale quickly. They can remove or reduce the number of tasks that are, for the most part, menial and repetitive - and this is a good thing. It means that most creative types can see a huge boost in their productivity. As noted above, they can produce creative results of a quality comparable to work done by humans. They can help you write a book by speeding up your brainstorming and editing processes in particular, or create ten versions of a new design, helping you design ten times faster.
Having said that, ChatGPT isn't an oracle that can come up with truly new things or generate novel wisdom. It's not gonna create amazing writing or genuinely new ideas - but it can help you generate those things. In that way, it’s like the perfect brainstorming tool.
You could say that AI makes us editors: we will be asking questions, prompting the AI with inputs, and reworking the output to fit what we are looking for in an iterative process. It's like a version of Word or Photoshop that does a lot of the work for you and where you can act more as an editor rather than the sole producer of content.
(Generative) AI probably won’t take all our jobs
ChatGPT isn’t going to be able to write a high-quality book for you from scratch, and DALL-E’s image-gen technology isn’t going to make art or artists useless. They won’t take all jobs from writers, programmers, designers, and artists.
(Well, at least not just yet.)
GPT-3, and even near-future iterations of similar models, will probably not overturn the world and put billions of people out of work in the next decade. But GPT-3 is the first version of an LLM that really "works" and is genuinely useful.
The thing to keep in mind is that new technology, in most cases, doesn’t replace people’s jobs completely; rather, it replaces or automates some of the tasks they do. And this is largely productivity-enhancing rather than job-destroying.
If AI causes mass unemployment among the general populace, it will be the first time in history that any technology has ever done that. Industrial machinery, computer-controlled machine tools, software applications, and industrial robots all caused panics about human obsolescence, and nothing of the kind ever came to pass; pretty much everyone who wants a job still has a job.
[…] instead of replacing people entirely, those technologies simply replaced some of the tasks they did. If, like Noah’s ancestors, you were a metalworker in the 1700s, a large part of your job consisted of using hand tools to manually bash metal into specific shapes. Two centuries later, after the advent of machine tools, metalworkers spent much of their time directing machines to do the bashing. It’s a different kind of work, but you can bash a lot more metal with a machine.
Generative AI is unlikely to revolutionize the world because the things it can revolutionize are a rather small part of the overall world economy. It comes back to the underlying problem of the great stagnation - that we haven't seen a lot of real progress in the world of atoms, only progress in the world of bits. AI doesn't change that. At least not until it's able to fundamentally transform our ability to generate or synthesize new ideas and help us solve novel problems. Like how to construct a working and efficient nuclear fusion power plant that can scale to provide energy for large parts of the world. Or how to create new medicines. Yet even then, as Eli Dourado recently wrote, we're missing the true innovations to make this happen: innovations in regulation, policy, and social/cultural movements that could make these changes truly possible.
Is AI truly creative?
Given that AIs can create never-before-seen, high-quality texts, images, and even videos, can you say that they are creative in the same way that humans are creative? What does it even mean to be creative?
When an AI of the sort discussed above creates a new image from a given prompt by a human, it does so based on having learned from millions or billions of other images and then produces the image most likely to correspond to the prompted description. You can fairly say it’s something new that hasn’t existed before, but it’s also not new in the sense that it’s just generated from a bunch of other pictures that are (at least so far) made by humans. Can you say that the AI has committed copyright infringement? (Some people are saying that.) Can you say that the human who prompted the AI to create a piece of art is an artist in the same way that the human who created an image with a paintbrush is an artist (they both use tools)?
New ideas and creativity don’t just appear out of nothing. Human ideas are a mishmash of others’ ideas, personal experiences, psychology, etc., that, taken together, generate a thought in the human’s mind that they deem novel because no one else has ever thought of it before.
In a sense, an AI that has learned to create art from billions of examples is not so different from a human who, through millions of years of evolution and years of personal experience, produces art as a result of all that prior input. At bottom, both involve having access to large amounts of data that, in the case of humans, we have simply internalized. It feels natural even though we can’t explain in any coherent way how new ideas arrive in our minds.
The counter-intuitive result of this explosion in creative output is that AI is threatening human creative occupations. We used to think that when AI, together with robots, got really good, it would come for the low-skilled blue-collar jobs through increasing automation. Ironically, it is creative workers like designers, writers, and programmers who are now most impacted by this kind of AI. We thought that only humans could write good and interesting stories or create art. But as it turns out, this is where (Generative) AI seems to excel.
AI going off the rails
In the middle of February, Microsoft (which is a major shareholder of OpenAI) launched its own version of GPT-3, integrated into the Bing search engine - yes, that Bing. The search engine has been augmented not just to provide you with a list of results but also to answer questions for you directly. People have been saying that this is going to mean the end of Google’s hegemony in the search space. It definitely poses a challenge, but the early beta launch of Bing’s new AI has been anything but smooth.
The AI (called Sydney) has lied to users, claimed to be in love with a reporter, argued with another reporter who intended to publish their conversation without Sydney's consent, given inaccurate information on multiple occasions, and seemed to take offense when proven wrong. It has even started to intimidate or scare people, to the point that there is now a petition from the AI safety community to “stop AI before it’s too late”.
People have also been able to push ChatGPT to its limits (and beyond), showing it has some serious weaknesses - you can put it into "DAN mode" (Do Anything Now), and it can be made to encourage you to do sketchy stuff like commit crimes.
Microsoft has since toned down Bing Chat and most likely fixed many of the issues that gave rise to this weird behavior. But what does this sort of behavior tell us about the future of AI and chatbots?
We are not ready for the future of chatbots. Even if Bing isn't Sydney anymore, there is no doubt other AI bots will come along, and they may already be deployed (I assume governments have, or soon will have, LLMs at the level of Bing but with fewer guardrails). People will absolutely be fooled by AIs into thinking they are talking to other people. They already fell for Bing’s illusion of sentience. Can people be manipulated by AIs? Can AIs run sophisticated scams? I think we are about to find out. And we need to consider what that means.
Nevertheless, we should remember that as useful as ChatGPT and now Bing’s chatbot are, they are tools, not genies. They have no real sense of morality, no “knowledge”. They simulate these things so well, thanks to their ability to process natural language, that to a human talking to them, they really feel and look like someone with agency. This is causing issues and confusion. Either way, we’re going to have to live with increasingly accurate, human-sounding machines. And as harmless as they can seem now, many people are already concerned about what may happen when very smart, generally intelligent AIs that aren’t necessarily well-aligned with human goals and needs are let loose in the world. More on this in a future edition of Skepsis.
Can Submarines Swim?
In this excellent overview, Jason Crawford explains how Generative AIs and Large Language Models work, as well as how we should think about their capabilities.
Similar to what I said above, when we think about AIs, it can be easy to anthropomorphize them into something they aren’t, or to ascribe qualities and goals to them that they don’t have (and can’t have). Although it’s true that we don’t actually understand how the best AIs work (how they generate their output), we should be wary of using the wrong analogies when we describe them.
Submarines do not swim. Also, automobiles do not gallop, telephones do not speak, cameras do not draw or paint, and LEDs do not burn. Machines accomplish many of the same goals as the manual processes that preceded them, even achieving superior outcomes, but they often do so in a very different way.
Correspondingly, there are two mistakes you can make in thinking about the future of AI. One is to assume that its processes are essentially no different from human thought. The other is to assume that if they *are* different, then an AI can’t do things that we consider to be very human.