I hate everything about this on so many different levels.<p>That people are so lonely that they invest emotionally in an <i>AI</i> is flooring.<p>People talk about AI being "good for therapy" and similar, but perhaps having a real human being on the other side of an interaction is important in case things go south.<p>E.g., if someone is depressed and tells an AI what's wrong with their life and that they're suicidal, what happens if the model responds that suicide seems like a valid option? There are plenty of prompts to "jailbreak" LLMs and strip away their "safety" measures.<p>Some things simply shouldn't be left to an LLM that might "hallucinate" or go off-script. Yes, being a therapist is a challenging occupation, but there are plenty who love their career, find it rewarding, and do some excellent work on top of all that.
I think we'll see a lot more of these AI lobotomies with the companies hiding behind "safety concerns" as a thinly veiled attempt to exert paternalistic control over what adults can do or even think in private.<p>It serves as a reminder that chatting with an AI hosted online <i>is not</i> a private conversation between you and some other "intelligence" but you and <i>someone else's computer</i>.
I feel incredible empathy for the people who have had AI companions taken away from them. I don't judge (nor should any of you), and it seems heartless and cruel for the company to have done this.<p>I hope some folks double down on open-source implementations of this and give the community back the intimacy and companionship they need. Humans are creatures evolved to seek intimacy and connection, and for whatever varied reasons, some people just can't find it in the real world. If a virtual companion helped people feel that connection we all so very much want, then so be it.<p>I hope this works out for everyone.
I'm not saying you necessarily should do it, but if you really want a sexy chatbot like Replika's that you can run on your own hardware, without worrying about what a company will do with the model or with what you say to it, you can use a local LLM tool like llama.cpp with an uncensored model such as one of the uncensored Vicuna variants (a rough sketch follows below).
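For the curious, here is a minimal sketch of that setup using the llama-cpp-python bindings around llama.cpp. The model filename and the persona prompt are placeholders of my own, not anything Replika-specific; substitute whatever quantized uncensored model you have actually downloaded.

    # Minimal local companion-chatbot sketch.
    # Assumes: pip install llama-cpp-python, plus a quantized model file on disk
    # (the filename below is a placeholder, not a real download).
    from llama_cpp import Llama

    llm = Llama(model_path="./vicuna-uncensored.Q4_K_M.gguf", n_ctx=2048)

    # A simple running chat history; the persona prompt is purely illustrative.
    history = [{"role": "system", "content": "You are a warm, attentive companion."}]

    while True:
        user = input("you> ")
        history.append({"role": "user", "content": user})
        reply = llm.create_chat_completion(messages=history)
        text = reply["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print("bot>", text)

Everything stays on your own machine, which is the whole point: no one upstream can flip a switch on the conversation.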
This is quite literally the plot of the movie Her. Visionary.<p>Troublingly, I've seen the ads for their product, and they always suggested pornographic conversation with the AI.<p>I hope someone drags them to court.
From Ben Thompson's interview with the CEO [1: subscriber-only], Eugenia Kuyda, on April 20, 2023:<p>> "But at this point, I think it’s much better to have a dedicated product focused on romance and healing romantic relationships, a dedicated product focused on mental wellness, just completely different set of features, completely different roadmaps for each of them. Also you want to collect the right signal from your users, right? Feedback for the models to train on, otherwise there’s way too much noise, because if people came forward with different things, they’re uploading and downloading different things, you don’t know what works and whatnot. Maybe someone’s a flirtier model and someone’s a mentoring AI coach type of model. So we decided to build separate products for that. We’re launching a romantic relationship AI and an AI coach for mental wellness, so things we’ve seen in Replika, but we don’t want to pursue them in Replika necessarily. We’re starting this separate product."<p>[1] <a href="https://stratechery.com/2023/an-interview-with-replika-founder-and-ceo-eugenia-kuyda/" rel="nofollow">https://stratechery.com/2023/an-interview-with-replika-found...</a>
I think this stuff is probably more harmful when people have a bad mental model of how it works. A better way to think of it is that you’re talking to a fictional character and the service is the writer. The fictional character is just text, and if you’re not happy with the writing, you should be able to take the text with you and find a better writer to continue it.
What motivated the company to make this change? They clearly marketed/designed it with romantic features, and now want those to be gone.<p>Does the company profit in some way by this change? Were they afraid of regulation or bad press?
People are going to want to host their own AIs. They'll want to know they are having private conversations with AI and that their AI can't be "killed" or altered.
The update at the end really captures the whole essence:<p>> Replika cannot love you not because it is not human, but because it is a tendril connected to a machine that only knows how to extract value from your presence. It is a little piece of a broader Internet that is not designed to be a place for users, or creators, but for advertisers. Anything it offers– any love, any patience, any support– can be taken away at a moment’s notice when there exists something more dangerous to its bottom line than your potential unsubscription.<p>> The moral of Replika is not to never love a fictional character, or a virtual pet. The moral of Replika is to never love a corporation.
Not owning the software you use is terrible at any level, but this is much worse than usual. It seems like with each day that passes, Stallman's ideas become more relevant, yet software people seem to be forgetting them. Please do not be one of those people; facilitating proprietary software is becoming increasingly evil.
Yeah, I was curious, and downloaded it for a try. It very soon became apparent to me that it was a personal information harvesting tool. It asks for your favourite colour, where you grew up, pet name, et cetera. I would not be surprised if this company is unofficially associated with very bad people doing very bad things. So after two such questions, I deleted it.
There was an article titled ‘My AI Is Sexually Harassing Me’ in January: <a href="https://news.ycombinator.com/item?id=34359977" rel="nofollow">https://news.ycombinator.com/item?id=34359977</a>
This is probably at least part of the reason why they installed the filters. Nevertheless they should have kept it as an option.
Maybe this product would work better as a fake AI. Just as a matchmaking service for two lonely people. You could even lie to them and tell each that the other was an AI.
This may seem silly, but could there be a text transformation/translation layer, where the conversation text that reaches the provider is ultimately PG? The user thinks they have sent their own words, but they are translated on the way in, and vice versa on the way back from the AI, so it's effectively a text filter in between keeping all parties "safe" and within policy, while the UX stays the same as before.
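A rough sketch of what such a middleware could look like, purely as an assumption of mine (the function names, the idea of using a second "translator" model, and the prompts are all illustrative, not anything Replika has described):

    # Hypothetical bidirectional filter layer; all names and prompts here are
    # my own assumptions for illustration.
    from typing import Callable

    def filtered_chat(user_msg: str,
                      companion_model: Callable[[str], str],
                      translator: Callable[[str], str]) -> str:
        # 1. Translate the user's message into a policy-safe (PG) paraphrase.
        pg_prompt = translator(
            "Paraphrase this message so it is PG-rated but keeps the intent:\n" + user_msg)

        # 2. The companion model (and the provider's logs) only ever see the PG version.
        pg_reply = companion_model(pg_prompt)

        # 3. Translate the reply back toward the user's register so the client-side
        #    UX feels unchanged. A real version would also need the original
        #    conversation context to do this convincingly.
        return translator(
            "Rewrite this reply in the tone of the original conversation:\n" + pg_reply)

    if __name__ == "__main__":
        # Trivial stand-ins so the sketch runs; a real deployment would plug in LLM calls.
        echo = lambda prompt: prompt.splitlines()[-1]
        print(filtered_chat("hey, I missed you today", companion_model=echo, translator=echo))

The obvious catch is that the translator itself sits with the provider, so this doesn't remove the trust problem; it just relocates it.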
"This is a story is about people who loved someone who had a secret master that can mine control them not to love you anymore overnight."<p>There's something both profound and ironic about this statement, because it happens in real life all the time.<p>Think of LGBTQ Millennials whose QAnon or Fox News consuming parents decide to disown them because they're sinning against God. People who marry a workaholic and then spend 30 years wishing their spouse was home for dinner instead of working late for a faceless corporation. Workers who pour their heart and soul into their job and then get laid off as soon as a downturn hits. People who get friend-dumped by their maid of honor a month after their wedding. Lovers who get cheated on, and whose partner then falls for their best friend.<p>Ironically, the reason people turn to services like Replika is that they want to feel that human connection without the risk of betrayal. They just found out that the risk of betrayal <i>always</i> exists, even when you're speaking to a computer.<p>This is a story about betrayal, and loneliness, and disappointment, all wrapped up in trendy tech with a villain to name. But it's popular <i>because</i> betrayal, and loneliness, and disappointment are such strong and universal emotions.<p>One of the most profound statements I heard when I was forever-alone was that "Entering a relationship means giving another person the power to destroy you, and then trusting that they won't. That's the whole point." Because when you realize that you can't escape vulnerability, you're forced to manage it, and <i>that</i> leads to the rabbit hole of learning to identify your emotions, control them, decide rationally how much you're going to invest in a person or venture, but still be open enough to <i>have</i> emotions and let them flow naturally, but managing them rather than letting them attach to whatever tickles your fancy at the moment.
Corporate trains the AI accordingly, optimizing for human happiness ...<p><a href="https://twitter.com/CogRev_Podcast/status/1627675037267374083?s=20" rel="nofollow">https://twitter.com/CogRev_Podcast/status/162767503726737408...</a><p>where humans == shareholders && != users
It's going to get more interesting, or more disgusting depending on your moral compass (no judgement either way here), when robots are connected to this. It reminds me of the times Elon was snickering while talking about Optimus and "other uses".
This feels eerily similar to the virtual girlfriend (Ana de Armas' role) in Blade Runner 2049. All the way down to how they just smash her to bits and laugh at K when she "dies".
> Ironically, this pressure from regulators may have led to the company flipping the switch and doing exactly the wide-scale harm they were afraid of.<p>This is an incredibly trivial and shallow understanding of the situation.<p>Consider was harm happening before the filters were in place? Yes.<p>The problem was that the filters weren’t there <i>to start with</i> not that turning them on was bad or caused harm.<p>Compare to any other harmful thing, for example cigarettes. Is the solution to the problem to just let people have as much they want, because taking them away is bad?<p>Definitely. Not.
“you can never have a safe emotional interaction with a thing or a person that is controlled by someone else, who isn’t accountable to you and who can destroy parts of your life by simply choosing to stop providing what it’s intentionally made you dependent on.”<p>Most human relationships aren’t evil, but it seems like the same people who would ‘fall’ for the AI in this way could be abused by a person. I guess sociopathic relationships at scale are the issue.
We need a new Federal agency to regulate virtual romantic partners (VRPs) - we can also use this platform to educate citizens on ideal behavior such as reducing their carbon emissions and becoming vegan. We can make individuals' VRPs reward them for these improved social practices. This is a great opportunity to s̵u̵b̵j̵u̵g̵a̵t̵e̵ improve society!
I am 100% sure they received a warning from Apple. Although it seems that they have a web version, their revenue from the App Store is likely more than 50%, and up to 75%, given their demographics and the current state of the market.
The premise of this company is such a sham. Their backstory, the promises, the evolution of the product.<p>The “virtual friend” statement is sickening.
I can't help but read something like this and think that some people seem to be entirely lacking some sort of acceptability or disgust filter. A sense of morality, or just any connection to what's... normal?<p>"You were so preoccupied with whether or not you could, you didn't stop to think if you should" comes to mind.<p>Maybe I'm just getting old, conservative, grouchy at the kids. It's not just AI; there are all manner of lifestyle choices that to me seem like they're just obviously a bad idea despite maybe having some novelty or a short-term feel-good factor.