Here’s a list of some terms used by AI insiders:
AGI — AGI stands for “artificial general intelligence.” As a concept, it’s used to mean an AI significantly more advanced than anything currently possible, one that can do most things as well as or better than most humans, including improving itself.
Example: “For me, AGI is the equivalent of a median human that you could hire as a coworker, and they could, say, do anything you would be happy with a remote coworker doing behind a computer,” Sam Altman said at a recent Greylock VC event.
AI ethics describes the desire to prevent AI from causing immediate harm, and often focuses on questions like how AI systems collect and process data and the possibility of bias in areas like housing or employment.
AI safety describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.
Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, alignment refers to practical work like building guardrail software and moderating content. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly toward humanity.
Example: “What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society,” Sam Altman said last week during the Senate hearing.
Emergent behavior — Emergent behavior is the technical way of saying that some AI models show abilities that weren’t initially intended. It can also describe surprising results from AI tools being deployed widely to the public.
Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.
Fast takeoff or hard takeoff — A phrase suggesting that if someone succeeds at building an AGI, the system will improve itself so quickly that it will already be too late to save humanity.
Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” said OpenAI CEO Sam Altman in a blog post.
Foom — Another way to say “hard takeoff.” It’s an onomatopoeia, and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.
Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun.
GPU — The chips used to train AI models and run inference, which descend from chips originally designed to render advanced computer games. The most commonly used chip at the moment is Nvidia’s A100.
Guardrails are software and policies that big tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.” It can also refer to specific applications that keep the AI from going off topic, like Nvidia’s “NeMo Guardrails” product.
Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, the chair of IBM’s AI ethics board and VP at the company, said in Congress this week.
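In software terms, the simplest possible guardrail is a post-processing check on a model’s output. The sketch below is purely illustrative — the function name, blocklist, and refusal wording are all invented for this example; real products like NeMo Guardrails use policies and classifiers rather than a word list:

```python
# A toy guardrail: a post-processing check that blocks a model's reply
# if it touches a disallowed topic. The topic list and refusal message
# are invented for illustration; real guardrail systems are far more
# sophisticated.
BLOCKED_TOPICS = {"weapons", "self-harm"}  # illustrative policy list

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return model_output

print(apply_guardrail("Here is a recipe for pancakes."))
```

Even this trivial version shows the basic design: the guardrail sits outside the model, inspecting what goes in or comes out rather than changing the model itself.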
Inference — The act of using an AI model to make predictions or generate text, images, or other content. Inference can require a lot of computing power.
Example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
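To make the concept concrete, here is a toy sketch of inference in Python. The probability table stands in for a model that has already been trained (every word and number in it is invented for illustration); using it to generate text involves no learning, only computation:

```python
# A toy illustration of inference: the "model" below is a fixed table of
# next-word probabilities, as if training had already happened. Using it
# to generate text is inference -- no learning occurs, only lookups.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, steps=3):
    words = [prompt_word]
    for _ in range(steps):
        choices = MODEL.get(words[-1])
        if not choices:
            break
        # Greedy inference: always pick the most probable next word.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

In a real system, each of those lookup steps is a pass through a network with billions of parameters, which is why serving millions of users at once demands so much GPU capacity.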
Large language model — A kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it uses terabytes of data to find the statistical relationships between words, which is how it produces text that seems like a human wrote it.
Example: “Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
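The statistical idea behind these models can be sketched in a few lines of Python: count which words follow which in some text, then predict the most common follower. A real large language model does something far more sophisticated, with billions of parameters and terabytes of text; this toy uses one sentence:

```python
from collections import Counter, defaultdict

# A toy version of what a large language model does at vastly greater
# scale: scan text, count which words follow which, and use those
# statistics to predict the next word.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the statistically most common follower of `word`.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scale that counting idea up by many orders of magnitude, and the statistical relationships become rich enough to produce text that seems like a human wrote it.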
Paperclips are an important symbol for AI safety proponents because they symbolize the chance an AGI could destroy humanity. It refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the mission to make as many paperclips as possible. It decides to turn all humans, Earth, and increasing parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.
Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.
Singularity is an older term that’s not used often anymore, but it refers to the moment that technological change becomes self-reinforcing, or the moment an AGI is created. It’s a metaphor: literally, a singularity is the point at the center of a black hole where density becomes infinite.
Example: “The advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.
Stochastic parrot — An important analogy for large language models emphasizing that while sophisticated AI models can produce realistic-seeming text, the software has no understanding of the concepts behind the language, much like a parrot. It was coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a contentious paper written while they were at Google.
Example: “Contrary to how it may seem when we observe its output, a [language model] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot,” from On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Training — The act of analyzing enormous amounts of data to create or improve an AI model.
Example: “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” — Future of Life Institute open letter
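At its core, training means repeatedly adjusting a model’s parameters so its predictions better fit the data. The toy sketch below fits a one-parameter model by gradient descent; the data, variable names, and learning rate are invented for illustration, and real training runs do the same thing with billions of parameters and enormous datasets:

```python
# A toy illustration of training: repeatedly nudge a model's parameters
# so its predictions better fit the data. Here the "model" is y = w * x
# and training finds w by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0              # the model's single parameter, initially untrained
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        error = w * x - y                # how wrong the prediction is
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 3))  # converges toward 2.0
```

The expense of real training comes from running a loop like this over trillions of words, which is why it requires warehouses full of GPUs and is the step the open letter above asks labs to pause.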