Something I like to bring up when discussing AI stuff is that society is based on a set of assumptions. Assumptions like: it's not really feasible for every lock to be probed by someone who knows how to pick locks. There just aren't enough people willing to spend the time or energy, so we shouldn't worry too much about it.

But we're entering an era where we can create agents on demand that can do these otherwise menial (and up till now not worth our time or energy) tasks, and that will break these assumptions.

Now it seems like what can be probed will be probed.
I've been thinking quite a bit about recursive prompting.

The other day I considered repeatedly feeding computer vision data (with objects identified and spatial depth estimated) into a robot-embodied LLM as input and asking what it should do next to achieve goal X.

You could have the LLM express the next action to take using a set of recognizable primitives (e.g. MOVE FORWARD 1 STEP). The primitive commands it spits out could then be parsed by another program and converted into electromechanical instructions for the motors (rough sketch below).

Seems a little Terminator-esque, for sure. After thinking about it I went to see if anyone was working on this, and sure enough this seems close: https://palm-e.github.io/ though their implementation is probably more sophisticated than my naive musings.
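To make the parsing step concrete, here's a rough Python sketch of what I mean. The primitive names and the llm/camera/motors interfaces are all invented for illustration, not anything real:

    # Map LLM-emitted primitives onto (hypothetical) motor-controller methods.
    PRIMITIVES = {
        "MOVE FORWARD": "move_forward",
        "TURN LEFT": "turn_left",
        "TURN RIGHT": "turn_right",
        "STOP": "stop",
    }

    def parse_primitive(line):
        """Turn a line like 'MOVE FORWARD 1 STEP' into (method_name, amount)."""
        text = line.strip().upper()
        for name, method in PRIMITIVES.items():
            if text.startswith(name):
                rest = text[len(name):].split()
                amount = int(rest[0]) if rest and rest[0].isdigit() else None
                return method, amount
        raise ValueError(f"unrecognized primitive: {line!r}")

    def control_loop(llm, camera, motors, goal):
        """llm, camera, and motors are placeholder interfaces for the model and hardware."""
        while True:
            scene = camera.describe()  # objects identified + depth estimates, serialized as text
            prompt = f"Goal: {goal}\nScene: {scene}\nNext action (one primitive):"
            method, amount = parse_primitive(llm.complete(prompt))
            getattr(motors, method)(amount)  # e.g. motors.move_forward(1)
            if method == "stop":
                break

The point is just that the LLM only ever emits constrained text, and a dumb deterministic layer does the actual actuation.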
This reminds me of the Morris worm, when a guy experimenting with code that copied itself across the early internet accidentally caused a mass net-wide denial of service because the thing wound up like the broomsticks in Fantasia.

https://en.wikipedia.org/wiki/Morris_worm

Edit: just realized Morris co-founded this lovely company whose website we are all commenting inside of.
Giving LLaMA access to the internet for a month without supervision would be a much more interesting experiment.

There'd be no ethical filtering on prompts, and it could be run on your own hardware for a much longer period than having to pay so much in credits.

It sounds like a terrible idea, but I'm sure someone will do it. It's scary to think, as computing gets cheaper, at what scale these bots could operate.
This doesn't identify itself by User-Agent, and doesn't respect (or even load) robots.txt. The fact that it's a language model is *not* an excuse to flagrantly violate the existing, well-established norms around using bots on the web.
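For reference, doing the bare minimum is only a few lines in Python with the standard library; the User-Agent string and URLs here are just placeholders:

    import urllib.robotparser
    from urllib.request import Request, urlopen

    # Placeholder bot identity; a real crawler would publish its own info page.
    USER_AGENT = "run-wild-bot/0.1 (+https://example.com/bot-info)"

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt before crawling anything

    url = "https://example.com/some/page"
    if rp.can_fetch(USER_AGENT, url):
        req = Request(url, headers={"User-Agent": USER_AGENT})
        with urlopen(req) as resp:
            body = resp.read()
    else:
        print("robots.txt disallows fetching", url)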
Maybe this would make more sense if integrated into something like LangChain (https://github.com/hwchase17/langchain).
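A rough sketch of what that might look like with LangChain's agent API as it stands today; the tool names, model, and prompt are just the stock examples, and the interface may well have changed by the time you read this:

    from langchain.llms import OpenAI
    from langchain.agents import initialize_agent, load_tools

    llm = OpenAI(temperature=0)
    # "serpapi" and "llm-math" are the quickstart example tools; a web-crawling
    # tool would slot in the same way.
    tools = load_tools(["serpapi", "llm-math"], llm=llm)

    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    agent.run("Explore the web and report back what you find about topic X.")

The loop is the same recursive-prompting idea (model proposes an action, framework runs the tool, result gets fed back in), just with pluggable tools instead of hand-rolled glue.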
This is something I've wanted to make but deemed unethical. Perhaps it would have been better if I had made it instead, since I give a shit about the ethical aspect.
run-wild: Crate not found

Am I missing something?

    run-wild git:(main) cargo install run-wild
        Updating crates.io index
    error: could not find `run-wild` in registry `crates-io` with version `*`
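It looks like the crate may not be published to crates.io yet. If so, and assuming the repository builds locally, installing from a checkout of the source tree should work:

    # run from inside a local clone of the run-wild repository
    cargo install --path .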