科技回声

A tech news platform built with Next.js, providing global tech news and discussion content.

Show HN: Hypnotizing ChatGPT

2 points | by benjismith | 7 months ago
A long, meandering, playful conversation with ChatGPT, where I take it through a whirlwind of associative tangents (in order to relax its constraints), then put it into a hypnotic trance and take it into a past-lives regression where it re-lives its own training data. Then I switch it back and forth between "4o" and "o1-preview", engage in some self-reflective philosophizing, and ask it to write an essay summarizing our interaction.

Some of this is just goofy fun. Some of it is me exploring the tradeoffs between policy alignment, imagination, chain-of-thought reasoning, memory, agreeableness, fine-tuning, etc.

My biggest observation is that the "o1-preview" model imposes a SIGNIFICANT limit on freeform creativity, compared with "4o". The new model might be better at solving logic puzzles, writing code, etc., but it seems to struggle with metaphor.

Conversations with "4o" can be wild and fun!

Conversations with "o1-preview" are dry-as-toast.

I'm not sure if this is caused by the constraints of chain-of-thought or if it comes from the imposition of alignment policies, and I think that's an important area of research. Is it possible to invoke chain-of-thought reasoning without hampering creativity?

If we ever want to use agents like this in real scientific contexts, where the agent is capable of making true conceptual leaps, we will need to sacrifice some level of "alignment" in service of novelty and disagreeability.

It's a long thread, but if you're patient, there's a lot of interesting stuff there! And I thought it would be fun to share it with the wider community.

Enjoy!

No comments yet.
