Experts warn that anthropomorphizing AI is both potentially powerful and problematic, but that hasn’t stopped companies from trying it. Character.AI, for example, lets users create chatbots that take on the personas of real or imaginary people. The company has reportedly sought funding that would value it at around $5 billion.
The way language models appear to mirror human behavior has also caught the attention of some scholars. MIT economist John Horton, for example, sees potential in using these simulated humans, which he calls Homo silicus, to model market behavior.
You don’t have to be an MIT professor or a multinational company to get a group of chatbots talking to one another. For the past several days, WIRED has been running a simulated society of 25 AI agents going about their daily lives in Smallville, a village with amenities including a college, stores, and a park. The characters chat with one another and move around a map that looks rather like the game Stardew Valley. Characters in the WIRED simulation include Jennifer Moore, a 68-year-old watercolor painter who putters around the house most days; Mei Lin, a teacher who can often be found helping her kids with their homework; and Tom Moreno, a cantankerous shopkeeper.
The characters in this simulated world are powered by OpenAI’s GPT-4 language model, but the software needed to create and maintain them was open-sourced by a team from Stanford University. The research shows how language models can be made to produce fascinating and realistic, if rather simple, social behavior. It was fun to see them chat with customers, take naps, and, in one case, decide to start a podcast.
Large language models “learned an enormous amount about human behavior” from their extensive training data, says Michael Bernstein, an associate professor at Stanford University who led the development of Smallville. He hopes that agents based on language models will be able to autonomously test software that exploits social connections before real humans use it. He says the project has also received a lot of interest from video game developers.
Stanford’s software includes a way for the chatbot-powered characters to remember who they are and what they’ve done, and to think about what to do next. “We started to build what we call a reflection architecture where, at regular intervals, the agents would sort of write down some of their more important memories and ask themselves questions about them,” Bernstein says. “You do this a few times and you kind of build up this tree of higher- and higher-level thoughts.”
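To make that idea concrete, here is a minimal sketch of such a reflection loop in Python. It is an illustration of the concept, not the Stanford team’s actual code (their open-source implementation lives at github.com/joonspk-research/generative_agents); the class names, the importance scores, and the `llm` callable are all assumptions.

```python
# A minimal sketch of a memory stream with periodic reflection.
# Everything here is illustrative, not the Stanford implementation.
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # assumed: scored, say 1-10, by the language model


@dataclass
class Agent:
    name: str
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str, importance: float) -> None:
        # Every observation lands in a flat "memory stream".
        self.memories.append(Memory(text, importance))

    def reflect(self, llm, top_k: int = 5) -> None:
        # At regular intervals, pull the most important memories...
        salient = sorted(self.memories, key=lambda m: -m.importance)[:top_k]
        listing = "\n".join(f"- {m.text}" for m in salient)
        # ...and ask the model what higher-level conclusion they support.
        insight = llm(
            f"Given these memories about {self.name}:\n{listing}\n"
            "What high-level insight follows from them?"
        )
        # The insight is stored as a memory itself, so later reflections can
        # build on it -- the "tree of higher- and higher-level thoughts".
        self.observe(insight, importance=8.0)
```

Because reflections are written back into the same memory stream they are drawn from, each pass can summarize earlier summaries, which is what lets the agents form increasingly abstract beliefs about themselves.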
According to Bernstein, anyone hoping to use AI to model real humans should ask how faithfully language models actually reflect real-world behavior. Characters generated this way are not as complex or intelligent as real people, and they may tend to be more stereotyped and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is “still an open research question,” he says.
Smallville is still fascinating and charming to watch. In one instance, described in the researchers’ paper on the project, the experimenters informed one character that it should throw a Valentine’s Day party. The team then watched as the agents autonomously spread invitations, asked one another out on dates to the party, and planned to show up together at the right time.
WIRED unfortunately wasn’t able to recreate this delightful phenomenon with its own minions, but they still managed to keep busy. Be warned, though: running a Smallville instance burns through API credits for OpenAI’s GPT-4 at an alarming rate. Bernstein says running the simulation for a day or more costs upward of a thousand dollars. Much like real humans, it seems, synthetic ones don’t work for free.
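For a rough sense of where that figure comes from, here is a back-of-envelope estimate. Every number below is an assumption for illustration: the per-agent call volume, the tokens per call, and the blended GPT-4 rate are guesses, not figures from WIRED’s run or OpenAI’s price list.

```python
# Rough cost model for a Smallville-style simulation.
# All constants are illustrative assumptions, not measured values.
AGENTS = 25                  # agents in the WIRED simulation
CALLS_PER_AGENT_DAY = 800    # assumed: observations, plans, chats, reflections
TOKENS_PER_CALL = 1_500      # assumed: prompt plus completion
USD_PER_1K_TOKENS = 0.05     # assumed blended GPT-4 rate at the time

tokens = AGENTS * CALLS_PER_AGENT_DAY * TOKENS_PER_CALL
cost = tokens / 1_000 * USD_PER_1K_TOKENS
print(f"~{tokens:,} tokens, ~${cost:,.0f} per simulated day")
# -> ~30,000,000 tokens, ~$1,500 per simulated day
```

Under these assumptions the bill lands comfortably past the thousand-dollar mark Bernstein cites, and it scales linearly with the number of agents and how often they think.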