> Claude feels not only safer but more fun than ChatGPT.<p>I may be in the minority here, but I really don't concern myself with ChatGPT safety, and I am not entirely sure why people are so worried about it. It is safer than most things I have in my house, including a kettle, a saw, a hammer, a screwdriver, my actual PC, and every kitchen appliance I own.<p>Of course it can be misused, like any tool, but no amount of safety features in ChatGPT will make its users more or less careful. If someone using ChatGPT cares nothing for using it safely, it will likely end poorly, just as it will end poorly if I use a hammer without any care for safety.
They say Claude is "more verbose", and claim this is a positive. I disagree. My biggest criticism of ChatGPT is that its answers are extraordinarily long and waffly. It sometimes reminds me of a scam artist trying to bamboozle me with words.<p>I would much prefer short, concise, precise answers.
> That Claude seems to have a detailed understanding of what it is, who its creators are, and what ethical principles guided its design is one of its more impressive features.<p>This doesn't show a detailed understanding of what it is; it's just a canned/trained response. I don't see why that would be impressive. When I receive such a response from an automated helpdesk, I don't think "Wow, this AI has a great understanding of what it is."
Is this not a superficial attempt at safety?<p>I would like my AI system to tell me how to hotwire a car if I am curious about how that works.<p>I would like my AI system to give me a detailed step-by-step car-hotwiring walkthrough if I am in a physically abusive relationship and my kids and I have only 30 minutes to hotwire the car and escape a remote area to safety.<p>I do not want AI systems to create children's books in the style of authors I know, for the purpose of selling books and reducing my friends' ability to have a happy, productive life, especially when it was trained on their work. I want my friends to be happy, and I have had some friends commit suicide. So maybe improving human happiness is a safety concern, and generating kids' books is not safe. But that doesn't look like "safety" from a superficial point of view.<p>The only way for an AI to make judgements about safety is for it to have general intelligence and some life experience (like we do), because it needs to figure out context to know whether it should be telling a particular person how to hotwire a car.<p>I am being very dismissive because I don't see this as a perfect solution, and it is easy to see why. But maybe someone who works on this can explain how an imperfect solution still has value? I am open to that possibility.<p>Maybe self-reflection and self-tuning are of general value, even if they only superficially address safety concerns in a one-dimensional way.<p>Perhaps these techniques can be used on something other than safety.
I'm hoping to one day run GPT-3/ChatGPT on my local computer, similarly to how one can run Stable Diffusion now.<p>I would love to have a personal conversation with these AI systems and use them as a sort of assistant, without the worry of being spied on. At the moment I can't use it as more than a glorified search engine, because of the privacy implications of running it in the cloud.
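For the curious, the local workflow already exists for much smaller open models. A minimal sketch using Hugging Face's transformers, with GPT-2 purely as a stand-in (ChatGPT-scale weights are not downloadable, so the model choice here is just an illustrative assumption):
<pre><code># GPT-2 as a stand-in: this only shows what local inference looks
# like today with an open model; quality is far below ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The capital of France is",
                max_new_tokens=10, do_sample=False)
print(out[0]["generated_text"])
</code></pre>
Everything runs on the local machine and nothing leaves it, which addresses the privacy concern; the gap is entirely in model quality, not tooling.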
Here is a fun example of what it can do: <a href="https://twitter.com/jayelmnop/status/1612243602633068549" rel="nofollow">https://twitter.com/jayelmnop/status/1612243602633068549</a>.
Hello HN — I’m the coauthor of this post. You may remember me as that guy who spent most of 2022 posting GPT-3 screenshots to Twitter, most famously prompt injection and “You are GPT-3”. Happy to answer any questions about Claude that I can.
Related:<p><i>Anthropic's Claude is said to improve on ChatGPT, but still has limitations</i> - <a href="https://news.ycombinator.com/item?id=34331396" rel="nofollow">https://news.ycombinator.com/item?id=34331396</a> - Jan 2023 (52 comments)
Semi off-topic, but for some time I have been dreaming of training a chatbot to communicate in cuneiform or hieroglyphs, to bring some old languages back to life. Could it be possible, using old tablets as training data?
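I imagine the starting point would be fine-tuning a small model on transliterations rather than the original signs, since the surviving corpus is tiny by language-model standards. A rough sketch with Hugging Face's transformers (the "tablets.txt" corpus file and the gpt2 base model are placeholders, not real artifacts):
<pre><code># Hedged sketch: fine-tune a small causal LM on transliterated tablets.
# "tablets.txt" is a hypothetical corpus (e.g. assembled from public
# transliteration databases); model choice is an assumption.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "tablets.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cuneiform-lm", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
</code></pre>
Whether that "brings the language back alive" or just interpolates plausibly is another question; working with the actual cuneiform or hieroglyph signs would also require a tokenizer covering those Unicode blocks.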
Humor is definitely in the eye of the beholder. I find the Seinfeld jokes by ChatGPT wittier and funnier than the run-of-the-mill comments created by Claude.<p>I don't know how well they stay in character, and there's a clear repetition problem (which Claude also exhibits somewhat), but I find ChatGPT's format more exaggerated, as expected from a comedy routine.
Somebody is maintaining an "awesome Claude" repo with Claude use cases, as well as Claude vs. ChatGPT comparisons.
<a href="https://news.ycombinator.com/item?id=34404536" rel="nofollow">https://news.ycombinator.com/item?id=34404536</a>
I just read that their chatbot updates Slack channels word-by-word, which requires a stream of message edits, plus an emoji to acknowledge that the interaction is over. Why do they make the text appear "word-by-word"? Is that a trick to reduce perceived response time, or is it a design feature (one that feels very much like a flaw to me)?
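My guess is that the word-by-word appearance falls out of token streaming: the model emits output incrementally, and showing it as it arrives cuts perceived latency even though total generation time is unchanged. A rough sketch of how a Slack bot might do it (stream_tokens is a hypothetical generator of model output; chat_postMessage, chat_update, and reactions_add are real Slack Web API methods):
<pre><code># Sketch: stream model tokens into a Slack message via repeated edits.
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token placeholder

def stream_to_slack(channel, stream_tokens):
    # Post a stub message, then grow it one chunk at a time via edits.
    ts = client.chat_postMessage(channel=channel, text="...")["ts"]
    text = ""
    for token in stream_tokens():  # hypothetical model-output generator
        text += token
        client.chat_update(channel=channel, ts=ts, text=text)
    # React to the finished message to signal the interaction is over.
    client.reactions_add(channel=channel, timestamp=ts,
                         name="white_check_mark")
</code></pre>
So it is mostly a perceived-latency trick: the reader starts reading before generation finishes, at the cost of a burst of message edits.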
I like to compare these models to the Star Trek main computer core. The computer on a starship is explicitly not self-aware, but has to interface with humans mostly through voice comms. It has to give accurate information for ship operations, something the chatbots so far still get wrong on occasion (or slip up on details).<p>The ship's computer also doesn't seem to do entertainment like "tell a bedtime story", since holography exists and does a better job. Those might be closer to chatbots' current stage of evolution.
Video demo: <a href="https://youtu.be/B7Mg8Hbcc0w" rel="nofollow">https://youtu.be/B7Mg8Hbcc0w</a><p>More info on Claude's principles/Constitution: <a href="https://lifearchitect.ai/anthropic/" rel="nofollow">https://lifearchitect.ai/anthropic/</a>
We need less censorious AIs, not more.<p>The claim that it's somehow "ethical" to have one guy baking his opinions into a tool used globally is absurd to anyone who has ever read anything about ethics.
Imagine an android connected to a vast network of information (ChatGPT-like). The android could generate various responses in real time, just by vocalizing the appropriate text.<p>It might be clunky at first, but it's a good starting base to improve upon. The android could, for example, store common and everyday responses in its RAM, making it semi-capable of autonomous speech.<p>Then it could use that information to further train itself, essentially creating a local model of its own behaviour. In other words, it could learn.
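The "common responses in RAM" part is basically a cache with a network fallback. A toy sketch (remote_query is a stand-in for a hypothetical round-trip to the hosted model):
<pre><code># Toy sketch: answer from local RAM when possible, else ask the network.
cache = {}

def remote_query(utterance):
    # Hypothetical stand-in for a call to the hosted model.
    return "(big-model response to: %s)" % utterance

def respond(utterance):
    key = utterance.strip().lower()
    if key in cache:
        return cache[key]            # fast path: no network round-trip
    reply = remote_query(utterance)  # slow path: network call
    cache[key] = reply               # remember it for next time
    return reply

print(respond("hello"))  # goes to the network
print(respond("Hello"))  # served from the cache
</code></pre>
The harder step is the last one the comment describes: distilling those cached interactions into a genuinely local model, rather than a lookup table.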
I appreciate the comparison and some of the prompts. I didn't even think to play with multi-hop questions.<p>Anyone remember Ask Jeeves? This feels like what it should have been.
And this is where OpenAI earns the "open" in its name: anyone can use ChatGPT. Anthropic's Claude (an incredibly amusing name to me) is not so accessible.
These models use industry-standard ML algorithms and known techniques. It's not as if some unknown startup suddenly discovered gradient descent or deep-learning RNNs and is keeping them confidential. Why would Microsoft consider it worthwhile even to contemplate paying $10B for these or similar?
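To the point that the math is textbook material: gradient descent itself fits in a few lines (a toy example minimizing f(x) = (x - 3)^2):
<pre><code># Toy gradient descent on f(x) = (x - 3)^2; the gradient is 2(x - 3).
x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * 2 * (x - 3)
print(round(x, 4))  # converges toward 3.0, the minimizer
</code></pre>
The expensive parts are the data, compute, and engineering built around algorithms like this, not the algorithms themselves.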
Signup form is here, but it's closed <a href="https://twitter.com/AnthropicAI/status/1604929999743508480?s=20&t=X3seo3gYmeC3heZZvN1qEg" rel="nofollow">https://twitter.com/AnthropicAI/status/1604929999743508480?s...</a>
I imagine that in the next decade we will be introduced to different AIs like new six-year-old children in a class. Each one will have different "parents", traits, and personalities.
Does this mean Microsoft's potential billion-dollar acquisition of OpenAI is a bad idea, because the IP is already out there and other companies are catching up?
So, can we try it? I can't find a link.<p>Also, why is everything now named with common names and nouns? It makes it annoyingly hard to google information about them.