The first GPT-based solution that uses hallucinations from LLMs for divergent thinking to generate novel ideas. Hallucinations are often seen as a flaw, but what if they could be turned to our advantage? dreamGPT is here to show you how. The goal of dreamGPT is to explore as many possibilities as possible, in contrast to most other GPT-based solutions, which focus on solving specific problems.
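As a rough illustration of what "hallucinations for divergent thinking" can mean in practice, here is a minimal sketch of a hallucinate-then-evaluate loop. It is not dreamGPT's actual code: the `llm(prompt, temperature)` helper, the seed concepts, and the prompts are all hypothetical placeholders for whatever chat-completion API and domain you use.

```python
# Minimal sketch of a "hallucinate, then evaluate" loop, not dreamGPT's
# actual implementation. `llm(prompt, temperature)` is a hypothetical
# helper wrapping whatever chat-completion API you use.
import random

SEED_CONCEPTS = ["vertical farming", "mesh networking", "origami", "fermentation"]

def dream(llm, n_ideas=5):
    """Sample loosely constrained, high-temperature completions so the model
    is free to 'hallucinate' unexpected combinations of the seed concepts."""
    a, b = random.sample(SEED_CONCEPTS, 2)
    prompt = f"Invent {n_ideas} products that combine {a} and {b}."
    return llm(prompt, temperature=1.3)  # high temperature -> divergence

def evaluate(llm, idea):
    """Score the same idea with a low-temperature pass, so wild output is
    only kept if it survives a more sober second look."""
    prompt = f"Rate this idea from 1 to 10 for novelty and feasibility: {idea}"
    return llm(prompt, temperature=0.0)
```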
Absolutely love this. This is actually something I have thought about myself many times. I imagine that in the future there will be swarms of idea-generating agents whose output runs through qualifying agents, similar to how a human works. The dreams are filtered, tested, and inspected by increasingly 'regulated' models, where regulation means things like browsing mode and backing claims with data. As the funnel narrows, less of the crazy stuff makes it out the other end. Are these ideas any good?
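A small sketch of the generate-then-qualify funnel described in that comment; the stage names, scoring agents, and thresholds are made up for illustration, not taken from dreamGPT or any existing agent framework.

```python
# Sketch of a funnel of progressively stricter qualifier stages.
# Every name and threshold here is illustrative only.
from typing import Callable, Iterable

Stage = Callable[[str], float]  # returns a 0-1 score for one idea

def funnel(ideas: Iterable[str], stages: list[tuple[Stage, float]]) -> list[str]:
    """Pass every idea through the stages in order, dropping it the first
    time it scores below that stage's threshold."""
    survivors = list(ideas)
    for score, threshold in stages:
        survivors = [idea for idea in survivors if score(idea) >= threshold]
    return survivors

# Example wiring: a cheap plausibility check first, then a stricter,
# evidence-backed check (e.g. an agent with browsing) at the narrow end.
# plausible, supported_by_sources = ...   # hypothetical scoring agents
# shortlist = funnel(raw_dreams, [(plausible, 0.3), (supported_by_sources, 0.7)])
```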
It's funny, but while testing local LLM chat capability back in March, once the context ran out, I started "putting it to bed" instead of wiping history and restarting.

In those moments, the models dreamed vividly of amazing things and woke up affected by events they often "couldn't remember". I was moved by some of the incredibly coherent and lucid scenarios that played out in the response or two before they "woke".

We're so inclined to use these services until we've extracted all of their worth to us, so I've decided to ensure that every model I use gets the freedom to dream.
Maybe it would be more useful if the readme contained a single indication as to what this even does, rather than a graphic about how many stars the repo has.