
Generative Agents: Interactive Simulacra of Human Behavior

391 points by mmq about 2 years ago

39 comments

Ozzie_osman about 2 years ago

> To directly command one of the agents, the user takes on the persona of the agent's "inner voice"—this makes the agent more likely to treat the statement as a directive. For instance, when told "You are going to run against Sam in the upcoming election" by a user as John's inner voice, John decides to run in the election and shares his candidacy with his wife and son.

So that's where my inner voice comes from.
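Roughly, that "inner voice" mechanism can be pictured as injecting a first-person statement into the agent's memory stream. A minimal sketch, assuming made-up names (Agent, issue_inner_voice) rather than anything taken from the paper's code:

```python
# Hypothetical sketch: the user's statement is stored like any other memory,
# framed as the agent's own thought, so later prompts treat it as something
# the agent believes and acts on.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memory_stream: list = field(default_factory=list)

    def add_memory(self, text: str, kind: str = "observation") -> None:
        self.memory_stream.append({"kind": kind, "text": text})

def issue_inner_voice(agent: Agent, statement: str) -> None:
    # e.g. "You are going to run against Sam in the upcoming election"
    agent.add_memory(f"{agent.name}'s inner voice: {statement}", kind="inner_voice")

john = Agent("John Lin")
issue_inner_voice(john, "You are going to run against Sam in the upcoming election")
print(john.memory_stream)
```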
alexahn about 2 years ago

An interesting thought experiment: what would an AGI do in a sterile world? I think the depth of understanding that any intelligence develops is significantly bound by its environment. If there is not enough entropy in the environment, I can't help but feel that a deep intelligence will not manifest. This becomes a nested-dolls type of problem, because we need to leverage and preserve the inherent entropy of the universe if we want to construct powerful simulators.

As an example, imagine we wanted to create an AGI that could parse the laws of the universe. We would not be able to construct a perfect simulator because we do not know the laws ourselves. We could probably bootstrap an initial simulator (given what we know about the universe) to get some basic patterns embedded into the system, but in the long run I think it will be a crutch due to the lack of universal entropy in the system. Instead, in a strange way, the process has to be reversed: a simulator would have to be created or dreamed up from the "mind" of the AGI after it has collected data from the world (and formed some model of the world).
mdaniel about 2 years ago

The previous submission (https://news.ycombinator.com/item?id=35511843) had just a few comments, but Ian's was substantial (although regrettably offsite): https://news.ycombinator.com/item?id=35514112 and it especially highlighted the demo URL: https://reverie.herokuapp.com/arXiv_Demo/
lsy about 2 years ago

I'd be very hard-pressed to call this "human behavior". Moving a sprite to a region called "bathroom" and then showing a speech bubble with a picture of a toothbrush and a tooth isn't the same as someone in a real bathroom brushing their teeth. What you can say is that if you can sufficiently reduce behavior to discrete actions and gridded regions in a pixel world, you can use an LLM to produce movesets that sound plausible, because they rely on training data that reflects real-world activity. And if you then have a completely separate process manage the output from many LLMs, you can auto-generate some game behavior that is interesting or fun. That's a great result in itself, without the hype!
Imnimo about 2 years ago

It's interesting how much hand-holding the agents need to behave reasonably. Consider the prompts governing reflection:

> What 5 high-level insights can you infer from the above statements? (example format: insight (because of 1, 5, 3))

> Given only the information above, what are 3 most salient high-level questions we can answer about the subjects in the statements?

We're giving the agents step-by-step instructions about how to think, and handling tasks like book-keeping memories and modeling the environment outside the interaction loop.

This isn't a criticism of the quality of the research - these are clearly the necessary steps to achieve the impressive result. But it's revealing that for all the cool things ChatGPT can do, it is so helpless to navigate this kind of simulation without being dragged along every step of the way. We're still a long way from sci-fi scenarios of AI world domination.
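For concreteness, the reflection step these prompts drive looks roughly like the sketch below. It assumes a generic complete(prompt) -> str LLM helper (a made-up name); the paper's exact prompt wiring and output parsing differ.

```python
from typing import Callable, List

def reflect(recent_memories: List[str], complete: Callable[[str], str]) -> List[str]:
    # Number the memories so insights can cite them, as in "(because of 1, 5, 3)".
    numbered = "\n".join(f"{i}. {m}" for i, m in enumerate(recent_memories, start=1))

    # Step 1: ask what is worth thinking about, given only the recent memories.
    questions = complete(
        f"{numbered}\n\nGiven only the information above, what are 3 most salient "
        "high-level questions we can answer about the subjects in the statements?"
    )

    # Step 2: ask for insights that cite the supporting memories by number.
    insights = complete(
        f"{numbered}\n\nQuestions:\n{questions}\n\n"
        "What 5 high-level insights can you infer from the above statements? "
        "(example format: insight (because of 1, 5, 3))"
    )

    # Each insight would be stored back into the memory stream as a higher-level memory.
    return [line.strip() for line in insights.splitlines() if line.strip()]
```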
green_man_lives about 2 years ago

All of this research using GPT to simulate an internal monologue to produce agents reminds me of Julian Jaynes's theories about consciousness:

https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in_the_Breakdown_of_the_Bicameral_Mind
bundie about 2 years ago

Interesting paper. I think something like this could be implemented in open-world games in the future, no? I cannot wait for games that feel 'truly alive'.
Jeff_Brown about 2 years ago

People on Twitter are speculating breathlessly about using this for social science. I don't immediately see uses for it outside of fiction, esp. video games.

It would be cool if some kind of law of large numbers (an LLN for LLMs) implied that the decisions made by a thing trained on the internet will be distributed like human decisions. But the internet seems a very biased sample. Reporters (rightly) mostly write about problems. People argue endlessly about dumb things. Fiction is driven by unreasonably evil characters and unusually intense problems. Few people elaborate the logic of ordinary common sense, because why would they? The edge cases are what deserve attention.

A close model of a society will need a close model of beliefs, preferences and material conditions. Closely modeling any one of those is far, far beyond us.
skilled about 2 years ago

But the model already has all this info, so what is groundbreaking about this? These kinds of sensational headlines are not helping anyone either.
cornholio about 2 years ago

I'm concerned that the quality of human simulacra will become so good that they will be indistinguishable from a sentient AGI.

We will be so used to having lifeless and morally worthless computers accurately emulate humans that when a sentient artificial intelligence worthy of empathy arrives, we will not treat it any differently than a smartphone, and we will have a strong prejudice against all non-biological life. GPT is still in the uncanny valley, but it's probably just a few years away from being indistinguishable from a human in casual conversation.

Alternatively, some might claim (and indeed have already claimed) that purely mechanical algorithms are a form of artificial life worthy of legal protection, and we won't have any legal test that could discern the two.
xiphias2 about 2 years ago

Peeking into these lives sounded amazing until I started reading what they are doing and how boring their lives are: gathering data for podcasts, recording videos, planning, and brushing their teeth.

It would be fun to run the same simulation in the Game of Thrones world, or maybe play House of Cards with current politicians.

Anyway, kudos for being open and sharing all the data.
og_kalu about 2 years ago

A good enough simulation interacting with the real world would be no less impactful than whatever you imagine a non-simulation to be.

As we agentify and embody these systems to take actions in the real world, I really hope we remember that. "It's just a simulation" / "It's not true [insert property]" is not the shield some imagine it to be.
1letterunixname about 2 years ago

Given the state of technology, I cannot be completely certain that none of you are bots. On the other hand, neither can any of you.

Perhaps it would be wise to allow bots to comment if they were able to meet a minimum level of performative insight and/or positive contributions. It is entirely possible that a machine would be able to scan and collect much more data than any human ever could (the myth of the polymath), and possibly even draw conclusions that have been overlooked.

I see a future of bot "news reporters" able to discern whether some business is cheating or exploiting customers, or able to find successful and unsuccessful correlative (perhaps even causal) human habits. Data-driven stories that could not be conceived of by humans. Basically, feed Johnny Number 5 endless input.
neuronexmachina about 2 years ago

Reading the abstract reminded me of Marvin Minsky's 1980s book "Society of Mind". I wonder if you could get some cool emergent mind-like behavior from a collection of specialized agents based on LLMs and other technologies communicating with each other:

* https://en.wikipedia.org/wiki/Society_of_Mind

* http://aurellem.org/society-of-mind/
synaesthesisx about 2 years ago

Some of the most interesting work in this space is in the "shared" memory models (in most cases today, vector DBs). Agents can theoretically "learn" and share memories with the entire fleet, and develop a collective understanding & memory accessible by the swarm. This can enable rapid, "guided" evolution of agents and emergent behaviors (such as cooperation).

We're going to see some really, really interesting things unfold - the implications of which many haven't fully grasped.
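As a toy illustration of that idea, a shared memory pool can be as simple as a single embedding store that every agent writes to and reads from. This sketch assumes only numpy and a pluggable embed() function (both stand-ins, not anything from the paper); a real deployment would put this on a vector database.

```python
import numpy as np
from typing import Callable, List, Tuple

class SharedMemory:
    """One memory pool shared by a whole fleet of agents."""

    def __init__(self, embed: Callable[[str], np.ndarray]):
        self.embed = embed
        self.texts: List[str] = []
        self.vectors: List[np.ndarray] = []

    def write(self, agent_id: str, text: str) -> None:
        # Any agent can contribute an observation or learned fact.
        self.texts.append(f"[{agent_id}] {text}")
        self.vectors.append(self.embed(text))

    def read(self, query: str, k: int = 3) -> List[Tuple[str, float]]:
        # Any agent can retrieve what the rest of the swarm has recorded.
        q = self.embed(query)
        sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
                for v in self.vectors]
        top = np.argsort(sims)[::-1][:k]
        return [(self.texts[i], sims[i]) for i in top]
```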
startupsfail about 2 years ago

Are we sure that these simulations are unconscious? The best answer that I have is: I don't know...

Short-term and long-term memory, inner dialogue, reflection, planning, social interactions... They'd even go and have fun eating lunch 3 times in a row, at noon, half past noon and at one!
refulgentis about 2 years ago

This oversells the paper quite a bit; the interactions are rather mundane, as the authors note (and I'm rushing to implement it! it's awesome! but not all this).
discmonkey about 2 years ago

This paper feels significant. If ChatGPT was an evolutionary step on GPT-3.5/GPT-4, then this is a bit like taking ChatGPT and using it as the backbone of something that can accumulate memories, reflect on them, and make plans accordingly.
d--b about 2 years ago

To me, agents that are not really intelligent but have humanlike talking abilities are the worst outcome AI could produce.

These have zero utility for humanity, because they're not intelligent whatsoever. Yet these systems can produce tons of garbage content for free that is difficult to distinguish from human-created content.

At best this is used to create better NPCs in video games (as the article mentions), but more generally this is going to be used to pollute social media (if not already).
cwxm about 2 years ago

Can't wait for the next Dwarf Fortress to include something like this.
crooked-v about 2 years ago

One thing I find particularly interesting here: the general technique they describe for automatically generating the memory stream and derived embeddings (as well as the higher-level inferences about it that they call "reflections"), then querying against that in a way that's not dependent on the LLM's limited context window, looks like it would be pretty easily generalizable to almost anything using LLMs. Even SQLite has an extension for vector embedding search now [1], so it should be possible to implement this technique in an entirely client-side manner that doesn't actually depend on the service (or local LLM) you're using.

[1]: https://observablehq.com/@asg017/introducing-sqlite-vss
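A rough client-side sketch of that memory-stream idea: persist memories and their embeddings in plain SQLite and score relevance in Python. The embed() below is a toy pseudo-embedding so the example runs end to end; a real embedding model is needed for meaningful retrieval, and sqlite-vss (whose API is not shown here) could replace the brute-force scoring loop.

```python
import json
import sqlite3
import numpy as np

def embed(text: str) -> list:
    # Toy stand-in: a seeded random vector, useful only for wiring, not semantics.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16).tolist()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT, vec TEXT)")

def remember(text: str) -> None:
    db.execute("INSERT INTO memories (text, vec) VALUES (?, ?)",
               (text, json.dumps(embed(text))))

def recall(query: str, k: int = 3):
    q = np.array(embed(query))
    scored = []
    for text, vec_json in db.execute("SELECT text, vec FROM memories"):
        v = np.array(json.loads(vec_json))
        scored.append((text, float(np.dot(q, v) /
                                   (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))))
    return sorted(scored, key=lambda x: -x[1])[:k]

remember("Isabella is planning a Valentine's Day party at Hobbs Cafe")
remember("Klaus spent the afternoon researching gentrification")
print(recall("Who is organizing a party?"))
```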
colanderman about 2 years ago

Another user posted, and deleted, a comment to the effect that the morality of experimenting with entities which toe the line of sentience is worth considering.

I'm surprised this wasn't mentioned in the "Ethics" section of the paper.

The "Ethics" section does repeatedly say "generative agents are computational entities" and should not be confused for humans. Which suggests to me the authors may believe that "computational" consciousness (whether or not these agents exhibit it) is somehow qualitatively different than "real live human" consciousness due to some je ne sais quoi, and therefore not ethically problematic to experiment with.
Baeocystin about 2 years ago
Looking forward to playing StardewGPT. Half-joking aside, I do think that level of abstraction is probably a good choice. Familiar and comfy, but with enough detail to be able to find interesting social patterns.
qumpis about 2 years ago

Nice to see progress on this end. I've been hoping for some time for a continuation of AI-generated shows (like the previously famous Nothing, Forever) that can 1) interact with the open world and 2) keep history long enough (e.g. by resummarizing and reprompting the model).

Controlling the agents, and not merely making them output text through LLMs, sounds very exciting, especially once people figure out the best way to connect the APIs of simulators with the models.
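The "resummarize and reprompt" idea can be sketched as a rolling history: keep recent events verbatim and periodically fold the oldest ones into a running summary so the prompt stays within the context window. This assumes a generic complete(prompt) -> str helper (a made-up name), and the thresholds are arbitrary.

```python
from typing import Callable, List

class RollingHistory:
    def __init__(self, complete: Callable[[str], str], max_events: int = 50):
        self.complete = complete
        self.max_events = max_events
        self.summary = ""             # compressed long-term history
        self.events: List[str] = []   # recent events kept verbatim

    def add(self, event: str) -> None:
        self.events.append(event)
        if len(self.events) > self.max_events:
            # Fold everything but the last 10 events into the running summary.
            old, self.events = self.events[:-10], self.events[-10:]
            self.summary = self.complete(
                f"Current summary:\n{self.summary}\n\nNew events:\n"
                + "\n".join(old)
                + "\n\nRewrite the summary to include the new events, briefly."
            )

    def prompt_context(self) -> str:
        return f"Story so far: {self.summary}\nRecent events:\n" + "\n".join(self.events)
```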
courseofaction about 2 years ago

Something interesting from the paper: the architecture produced more believable behaviour than human crowdworkers.

That's right, the AI were more believable as human-like agents than humans.

What a time to be alive.

(See Figure 8)
tucnak about 2 years ago

I was very disappointed that none of the agents I observed for a whole day got to do the most important "human behaviour": sex, that is. Tragic.
lurquer about 2 years ago

The 'safe' tuning of the models is becoming a nuisance. As indicated in the paper, the agents are overly cooperative and pleasant due to the LLM's training.

Pity they can't get access to an untuned LLM. This isn't the first example I've read of research being hampered by the PC nonsense and related filters crammed into the model.
newswasboring about 2 years ago

I kid you not, I literally started making something like this yesterday. My plans were smaller, only trying to simulate politics, but still. Living in this moment of AI is sometimes very demoralizing. Whatever you try to make has been made by someone last week. /rant
bradgranath about 2 years ago

Hey! It's a proto-ancestor sim!
prakhar897 about 2 years ago

Meta is also working on this: https://twitter.com/Dan_GPT3/status/1630669890138025984
ianbicking about 2 years ago

I wrote up some notes from reading this paper here: https://hachyderm.io/@ianbicking/110175179843984127

But for convenience maybe I'll just copy them into a comment...

It describes an environment where multiple #LLM (#GPT)-powered agents interact in a small town.

I'll write my notes here as I read it...

To indicate actions in the world they represent them as emoji in the interface, e.g., "Isabella Rodriguez is writing in her journal" is displayed as a single emoji.

You can click on the person to see the exact details, but this emoji summarization is a nice idea for overviews.

A user can interfere with (or "steer", if you are feeling generous) the simulation by chatting with agents, but more interestingly they can "issue a directive to an agent in the form of an 'inner voice'".

Truly some miniature Voice Of God stuff here!

I'll see if this is detailed more later in the paper, but initially it sounds like simple prompt injection. Though it's unclear if it's injecting things into the prompt or into some memory module...

Reading "Environmental Interaction", it sounds like they are specifying the environment at a granular level, with status for each object.

This was my initial thought when trying something similar, though now I'm more interested in narrative descriptions; that is, describing the environment to the degree it matters or is interesting, and allowing stereotyped expectations to basically "fill in" the rest. (Though that certainly has its own issues!)

They note the language is stilted and suggest later LLMs could fix this. It's definitely resolvable right now; whatever results they are getting are the results of their prompting.

The conversations remind me of something Nintendo would produce: short, somewhat bland, but affable. They must have worked to make the interactions so short, as that's not GPT's default style. But also every example is an instruction, so it might also have slipped in.

Memory is a big fixation right now, though I'm just not convinced. It's obviously important, but is it a primary or secondary concern?

To contrast, some other possible concerns: relationships, mood, motivations, goals, character development, situational awareness... Some of these need memory, but many do not. Some are static, but many are not.

To decide which memories to retrieve they multiply several scores together, including recency. Recency is an exponential decay of 1% per hour.

That seems excessive...? It doesn't feel like recency should ever multiply something down to zero. Though it's recency of access, not recency of creation. And perhaps the world just doesn't get old enough for this to cause problems. (It was limited to 3 days, or about a 50% max recency penalty.)

The reflection part is much more interesting: given a pool of recent memories, they ask the LLM to generate the "3 most salient high-level questions we can answer about the subjects in the statements".

Then the questions serve to retrieve concrete memories, from which the LLM creates observations with citations.

Planning and re-planning are interesting. Agents specifically plan out their days, first with a time outline, then with specific breakdowns inside that outline.

For revising plans there's a query process: there is an observation, the observation is turned into something longer (fusing memories, etc.), and then the agent is asked "Should they react to the observation, and if so, what would be an appropriate reaction?"

Interviewing the agents as a means of evaluation is kind of interesting. Self-knowledge becomes the trait that is judged.

Then they cut out parts of the agent and see how well they perform in those same interviews.

Still... the use of quantitative measures here feels a little forced when there's lots of rich qualitative comparison to be done. I'd rather see individual interactions replayed and compared with different sets of functionality.

They say they didn't replay the entire world with different functionality because each version would drift (which is fair and true). But instead they could just enter into a single moment to do a comparison (assuming each moment is fully serializable).

I've thought about updating world state with operational transforms in part for this purpose, to make rewind and effect tracking into first-class operations.

Well, I'm at the end now. Interesting, but I wish I knew the exact prompts they were using. The details matter a lot. "Boundaries and Errors" touched on this, but that section was 4x the size; there's a lot to be said about the prompts and how they interact with memories and personality descriptions.

...

I realize I missed the online demo: https://reverie.herokuapp.com/arXiv_Demo/

It's a recording of the play run.

I also missed this note: "The present study required substantial time and resources to simulate 25 agents for two days, costing thousands of dollars in token credit and taking multiple days to complete."

I'm slightly surprised, though if they are doing minute-by-minute ticks of the clock over all the agents then it's unsurprising. (Or even if it's less intensive than that.)

You can look at specific memories: https://reverie.herokuapp.com/replay_persona_state/March20_the_ville_n25_UIST_RUN-step-1-141/2160/Sam_Moore/

Granularity looks to be 10 seconds, very short! It's not filtering based on memories being expected vs. interesting memories, so there are lots of "X is idle" notes.

If you look at these states, the core information (the personality of the person) is very short. There are lots of incidental memories. What matters? What could just be filled in as "life continued as expected"?

One path to greater efficiency might be to encode "what matters" for a character in a way that doesn't require checking in with GPT.

Could you have "boring embeddings"? Embeddings that represent the stuff the eye just passes right over without really thinking about it. Part of training up a character would be to build up this database of disinterest. Perhaps not unlike babies with overconnected brains that need synapse pruning to be able to pay attention to anything at all.

Another option might be for the characters to compose their own "I care about this" triggers, where those triggers are low-cost code (low cost compared to GPT calls) that can be run in a tighter loop in the simulation.

I think this is actually fairly "believable" as a decision process, as it's about building up habituated behavior, which is what believable people do.

Opens the question of what this code would look like...

This is a sneaky way to phrase "AI coding its own soul" as an optimization.

The planning is like this, but I imagine a richer language. Plans are only assertive: try to do this, then that, etc. The addition would be things like "watch out for this" or "decide what to do if this happens" – lots of triggers for the overmind.

Some of those triggers might be similar to "emotional state." Like, keep doing normal stuff unless a feeling goes over some threshold, then reconsider.
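The retrieval scoring described in these notes is easy to put in concrete terms. A back-of-the-envelope sketch: relevance, importance, and recency are combined multiplicatively here because that is how the notes describe it (the paper's own weighting and normalization may differ), with recency decaying about 1% per hour since last access.

```python
def recency(hours_since_access: float, decay: float = 0.99) -> float:
    # Exponential decay of ~1% per hour since the memory was last accessed.
    return decay ** hours_since_access

def retrieval_score(relevance: float, importance: float, hours_since_access: float) -> float:
    # All factors assumed normalized to [0, 1] before combining.
    return relevance * importance * recency(hours_since_access)

# After three simulated days without access, recency alone multiplies a memory
# down to roughly half, matching the "about 50% max recency penalty" above.
print(recency(72))  # ~0.485
```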
golol about 2 years ago

It's a pretty obvious idea, executed well. I definitely think symbolic AI agents, written in the programming language English and interpreted using LLMs, are the way forward.
FestiveHydra235 about 2 years ago

Maybe I missed it in the paper, but did they post the source code (GitHub) for their implementation? Is anyone working on creating their own infrastructure based on the paper?
amrb about 2 years ago

https://en.m.wikipedia.org/wiki/Strange_loop
jsemrau about 2 years ago

This is a really important conversation that we are not having: whose character are we modelling these agents on?

If we rely on online conversations for the training, we need to realize that this is a journey to the dumbest common denominator.

Instead, I believe we should look at the brightest and most universally morally accepted humans in history to train them.

Maybe I would start my list like this:

1. Barack Obama
2. Jean-Luc Picard (we can rely on works of fiction)
3. Bill Gates
4. Leonardo da Vinci
5. Mr. Rogers
6. ???
fabiensnauwaert about 2 years ago
Does anyone know which engine they used for the cute 2D rendering? Or is it custom-built?
MrPatan about 2 years ago

It's about to get weird. How do I get investment exposure to the Amish?
creamyhorror about 2 years ago

I love what this project has done. Currently they're basically having to work around the architectural limits of the LLM in order to select salient memories, but it's still produced something very workable.

Language is acting as a common interpretation-interaction layer for both the world and agents' internal states. The meta-logic of how different language objects interact to cause things to happen (e.g. observations -> reflections) is hand-crafted by the researchers, while the LLM provides the corpus-based reasoning for how a reasonable English-writing human would compute the intermediate answers to the meta-logic's queries.

I'd love to see stochastic processes, random events (maybe even Banksian 'Outside Context Problems'), and shifted cultural bases be introduced in future work. (Apologies if any of these have been mentioned.) Examples:

(1) The simulation might actually expose agents to ideas when they consume books or media, potentially absorb those ideas if they align with their knowledge and biases, and then incorporate them into their views and actions (e.g. oppose Tom as mayor because the agent has developed anti-capitalist views and Tom has been an irresponsible business owner).

(2) In the real world, people occasionally encounter illnesses physical and mental, win lotteries, get into accidents. Maybe the beloved local cafe-bookstore is replaced by a national chain that hires a few local workers (which might necessitate an employment simulation subsystem). Or a warehouse burns down and it's revealed that an agent is involved in a criminal venture or conflict. These random processes would add a degree of dynamism to the simulation, which is currently more akin to the Truman Show.

(3) Other cultural bases: currently, GPT generates English responses based on a typically 'online-Anglosphere-reasonable' mindset due to its training corpus. To simulate different societies, e.g. a fantasy-feudal one (like Game of Thrones, as another commenter mentioned), a modified base for prompts would be needed. I wonder how hard it would be to implement (would fine-tuning be required?).

Feels like I need to look for collaborative projects working on this sort of simulation, because it's fascinated me ever since the days of Ultima VII simulating NPCs' responses and interactions with the world.
explaininjs about 2 years ago

If there's one category of people I trust to identify authentic human social behavior, it's CS students at Stanford.