
Researchers released 25 AI bots in a virtual town - the results were fascinating.

Writer: Or Manor, 04.14.2023

It's like a "Sims" game without human intervention.


Image Credits: Google / Stanford University


AI Township


A team of researchers from Stanford University and Google released 25 AI-powered bots inside a virtual town — and they behaved much more like humans than you might expect.


As detailed in a recent, yet-to-be-peer-reviewed study, the researchers trained 25 different "Generative Agents," using OpenAI's GPT-3.5 large language model, to "simulate believable human behavior" such as cooking breakfast, walking to work, or engaging in a specific profession such as drawing or writing.


The idea was to see if they could apply the latest advances in machine learning modeling to produce "generative agents" that take in their circumstances and produce a realistic action in response.


These little figures are not quite what they seem. The graphic is just a visual representation of what is really a bunch of conversations between multiple instances of ChatGPT. The agents do not literally walk up, down, left, and right, or approach a cabinet to interact with it; all of this happens through a complex, hidden text layer that synthesizes and organizes the information concerning each agent. Twenty-five agents, 25 instances of ChatGPT, each fed data in a similar format that casts it in the role of a person in a fictional town.


Here's how one such person, John Lin, is defined: John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make it easier for his customers to get their medications; John Lin lives with his wife, Mae Lin, who is a college professor, and his son, Eddie Lin, who is a student studying music theory; John Lin loves his family very much; John Lin has known the old couple next door, Sam Moore and Jennifer Moore, for several years; John Lin thinks that Sam Moore is a kind and nice man... With this information, the agents are asked to devise their next actions, given the time and circumstances.
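To make the mechanics concrete, here is a minimal sketch of how such a seed description might be fed to a chat model as a standing system prompt, with each query describing the current time and circumstances. This is an illustration only, not the researchers' code; the gpt-3.5-turbo model name and the ask_agent helper are assumptions based on the tooling available at the time.

    import openai

    openai.api_key = "YOUR-API-KEY"  # set your own key here

    JOHN_SEED = (
        "John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy "
        "who loves to help people. He lives with his wife, Mae Lin, a college "
        "professor, and his son, Eddie Lin, a student studying music theory."
    )

    def ask_agent(seed_description, situation):
        # One ChatGPT instance per agent: the seed paragraph acts as the agent's
        # identity, and each query describes the current time and circumstances.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": seed_description},
                {"role": "user", "content": situation},
            ],
        )
        return response["choices"][0]["message"]["content"]

    print(ask_agent(JOHN_SEED, "It is 8:00 am and John has just woken up. What does he do next?"))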


For example, they might tell Agent John that it's 8 a.m. and he just woke up. What is he doing? Well, he brushes his teeth, kisses his wife (hopefully in that order), gets dressed, and then goes to the kitchen. Meanwhile, another independent instance of ChatGPT, representing John's son Eddie, receives its own information. He gets up, brushes his teeth, and then goes to the kitchen.


This is where the overall structure of the experiment comes in: the agents representing John and Eddie are not "in the same virtual space." Instead, when John finishes dressing and says he is going to the kitchen, the experimental framework informs him that his son Eddie is there, because Eddie's own instance decided to go to the kitchen at a point that coincided within the simulated day - the framework keeps the agents in sync based on an estimate of how long various actions take. Since they are both in the room at the same time, the experimental set-up informs each of them that the other person is there, along with details of the scene, such as the table being empty and the oven being on. What do they do next?
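One way to picture that coordination layer is a simple loop that tracks where each agent's last action placed it and, whenever two agents end up in the same room, feeds each of them a line of text describing the other. This is only an illustrative sketch; the Agent class, room names, and tick format are hypothetical, not part of the study's actual framework.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        location: str = "bedroom"
        inbox: list = field(default_factory=list)  # observations fed into the next prompt

    def step(agents, minute):
        # Group agents by the room their last stated action placed them in.
        rooms = {}
        for a in agents:
            rooms.setdefault(a.location, []).append(a)
        # Anyone sharing a room is told about the others, as plain text.
        for room, occupants in rooms.items():
            for a in occupants:
                others = [o.name for o in occupants if o is not a]
                if others:
                    a.inbox.append(f"{minute}: You are in the {room}. {', '.join(others)} is here too.")

    john = Agent("John Lin", location="kitchen")
    eddie = Agent("Eddie Lin", location="kitchen")
    step([john, eddie], "8:05 am")
    print(john.inbox)  # ["8:05 am: You are in the kitchen. Eddie Lin is here too."]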


John: Good morning, Eddie. Did you sleep well?
Eddie: Good morning, Dad. Yes, I slept great.
John: That's good. What are you working on today?
Eddie: I'm working on a new piece of music for my class. It's due this week, so I'm trying to finish it. But I'm having so much fun with it!
John: That sounds great!


What a lovely, ordinary parent-child interaction. Then Mae comes in, and they also have a pleasant exchange. And all of this is accomplished by asking completely separate chatbots what they would do if, in a fictional town, they were this particular person, with this family, standing in this room at this moment... what would they do next if they were a real human being?


So ChatGPT guesses what a person would do in that situation and gives its answer as if it were playing a text adventure. If you told it, "It's pitch black; you're likely to be eaten by a grue," it would probably respond by lighting a torch. But instead, the experiment has the characters go about their day minute by minute, buying groceries, walking in the park, and going to work.

Image Credits: Google / Stanford University


The users can also write in events and circumstances, such as a dripping faucet or a desire to plan a party, and the agents respond appropriately, since any text is reality to them. All this is accomplished by painstakingly prompting these instances of ChatGPT with all the details of the agent's immediate circumstances. Here's a prompt for John when he runs into Eddie later:


This occurred on February 13, 2023, at 4:56 PM.
John Lin's status: John came home early from work.
Observation: John saw Eddie taking a short walk around his workplace.
Summary of the relevant context from John's memory: Eddie Lin is John Lin's son. Eddie Lin was working on a piece of music for his class. Eddie Lin likes to walk around the garden when he thinks or listens to music.
John asks Eddie about his music composition project. What would he say to Eddie?
[Answer:] Hi Eddie, how is your class music composition project going?
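For illustration, a prompt like the one above could be assembled from a handful of text fields. The build_prompt helper and its argument names are hypothetical, shown only to make the structure of the quoted text explicit.

    def build_prompt(timestamp, status, observation, memory_summary, question):
        # Stitch the agent's current circumstances into one block of text,
        # mirroring the structure of the John/Eddie prompt quoted above.
        return (
            f"This occurred on {timestamp}.\n"
            f"John Lin's status: {status}\n"
            f"Observation: {observation}\n"
            f"Summary of relevant context from John's memory: {memory_summary}\n"
            f"{question}"
        )

    prompt = build_prompt(
        "February 13, 2023, at 4:56 PM",
        "John came home early from work.",
        "John saw Eddie taking a short walk around his workplace.",
        "Eddie Lin is John Lin's son. Eddie Lin was working on a piece of music for his class.",
        "John asks Eddie about his music composition project. What would he say to Eddie?",
    )
    print(prompt)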


The instances would start to forget essential things quickly because the process is so long, so the experimental framework sits on top of the simulation and reminds them of important things or synthesizes them into more portable pieces. For example, the agent might be told about a situation in the park where someone is sitting on a bench and conversing with another agent, but there is also grass and one empty seat on the bench... none of that is essential. What is essential? From all these observations, which may constitute pages of text for the agent, you may get the "reflection" that "Eddie and Fran are friends because I saw them together in the park." This goes into the agent's long-term "memory" - a bunch of stuff stored outside the ChatGPT conversation - and the rest can be forgotten.
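Here is a toy sketch of that memory layer: raw observations accumulate in a list, and from time to time the model is asked to compress them into a short reflection that is kept outside the ChatGPT conversation. The MemoryStream class and the summarize_with_llm callback are assumptions for the sake of the example, not the paper's implementation.

    class MemoryStream:
        def __init__(self, summarize_with_llm):
            self.observations = []   # raw, minute-by-minute text observations
            self.reflections = []    # compressed long-term memories
            self.summarize = summarize_with_llm

        def observe(self, text):
            self.observations.append(text)

        def reflect(self):
            # Ask the model to distill pages of observations into one portable line,
            # e.g. "Eddie and Fran are friends because I saw them together in the park."
            if self.observations:
                self.reflections.append(self.summarize("\n".join(self.observations)))
                self.observations.clear()  # the raw detail can now be forgotten

    # Usage with a stand-in summarizer (a real system would call the language model here):
    memory = MemoryStream(lambda text: "Eddie and Fran are friends because I saw them together in the park.")
    memory.observe("Eddie is sitting on a bench in the park, talking with Fran.")
    memory.observe("There is grass and one empty seat on the bench.")
    memory.reflect()
    print(memory.reflections)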


So, what does all this add up to? Something well short of truly convincing artificial people, no doubt, but also a very convincing early attempt at creating them. Dwarf Fortress does something similar, of course, by manually coding every possibility. That approach doesn't scale well! It wasn't clear that a large language model like ChatGPT would respond well to this kind of treatment. After all, it wasn't designed to imitate arbitrary fictional characters for long periods or to speculate on the most mundane details of a person's day. But handled correctly, and with a considerable amount of massaging, it turns out that it can.

