This story is really interesting for all the not-specific-to-Quora parts. It’s pretty obvious that Quora has a strange culture of know-it-alls trying to show off, but this AI product makes it clear that the act of showing off <i>is the product</i>. The answers are garbage and they obviously don’t care. The key action for sale is not finding an answer, but writing one.<p>What’s really interesting is that the Quora product team decided they wanted to juice engagement with fake AI-created posts, and real humans are complying by answering them. Questions that make no sense, and that no human asked. Quora is proving (again) that people will engage in parasocial behavior with LLMs. We’ve seen AI girlfriends and companions, we’ve seen tutors and knowledge assistants. This is the opportunity to brag - artificial students (likely cut from the same API as those tutors).<p>I can only assume that question-answering users are the real market they advertise to, which would imply this strategy is profitable, or could be, for them. The nice thing about this compared to more personalized LLM companions like AI girlfriends is that the inference costs can be distributed across all users by sharing the questions. I would posit that there exists an opportunity to make a Twitter or Reddit clone that explicitly intermixes bots, maybe even exclusively bots. People want to talk on the internet and be told they’re special. The interaction doesn’t need to be organic.<p>For proof, look at all the top-level comments on this HN post with no engagement, by users who have repeated the same pattern discussed about Quora.
I wonder if it would really come down to AI if you did the user research, maybe focused on the “onboarding” process. I remember a time when you would get a Quora result on a search query and could click on it and find a few answers, with comments on them to help you determine their merits.<p>This is completely anecdotal, but these days I honestly can’t even find the answer if I click on a Quora result on a search engine. Instead I’m greeted by “similar” questions and whatnot, in a UI I don’t really understand too well. Now, I was never a Quora user, as in, I never signed up for an account, so I’m probably not in their target audience. With their modern entry point, however, I’ve simply banned their domain from my search results along with sites like Pinterest, because they are essentially just a waste of time if I accidentally hit them.<p>Maybe that doesn’t matter. I would probably never have signed up to actually answer things, or even ask, but if anyone who would be a potential user is like me, then they won’t ever get to the “onboarding” process of joining the site, and I’d wager that was more damaging than AI. LLMs are more akin to the final nail in the coffin for a lot of these sites, which have made it so user-hostile to join their “communities”. Again, it’s just anecdotal; I’ve done or read no research. But I do think it’s interesting that we’re now at a point where many people will include “Reddit” in their search queries when looking for answers, and Reddit is the easiest site in the world to join or even read without signing in (at least old.reddit is).
From my point of view, the underlying problem here seems to be that someone on Quora's product team feels the best way to drive engagement is to predict what the user might want to know about and then spam them with it, rather than letting them discover things on their own or using their feedback.<p>I stopped using Quora 8 or 9 years back because, despite how much time I spent curating the feed I was served, the questions always seemed to veer back into questions about life experiences, `write a short story in 3 lines` prompts, or relationships. This was most probably due to topics like these being a fad in India at the time, where they often received a lot of answers and upvotes.<p>Every time I marked a question as something I was not interested in because of the topic, Quora assumed that I did not like the author's answer and then proceeded to show me 5 other questions the author had responded to. I assumed it was because of some bug on their side ignoring my topic preferences, until I spent a week in Dubai and saw most of the questions in my feed match the topics I had marked as interesting.
Have you ever spoken with someone high on crack? Or a schizophrenic?<p>That's how I feel when reading an answer on Quora after they decided to randomly mix answers to <i>other</i> questions into the one you're interested in.<p>Whoever decided to do that should be fired on the spot and never allowed to touch a product again in their life.
Remember when Yahoo! Answers was so badly moderated that people asked stupid bait questions intentionally?<p>Quora has managed to successfully leverage AI to industrialize the process. I won't miss them as much as Yahoo! Answers (which already isn't much).
Quora has always been trash. There's a reason why we append 'reddit' to a search query if we want a human opinion and not 'quora'.
The "Quora ruined Quora" narrative was also recently featured, in Feb:<p><a href="https://slate.com/technology/2024/02/quora-what-happened-ai-decline.html" rel="nofollow">https://slate.com/technology/2024/02/quora-what-happened-ai-...</a><p><a href="https://news.ycombinator.com/item?id=39241448">https://news.ycombinator.com/item?id=39241448</a>
I cannot remember a time when Quora was not broken, so AI certainly didn't ruin it.<p>The UI and their login popups have always been horrible enough that I try to avoid them. But in addition to that, most specific questions - the ones that would benefit the most from a nuanced human answer - get answered with generic copypasta. My impression is that Quora tried gamifying the process of answering... And they succeeded, to their detriment. Now people keep answering questions that they have no clue about just to make their presumed importance score go up. It's a bit like those grifters who order a ghost-written book in their name so that they can claim authority before they've learned a new skill.
Reading the author's explanation of how Quora became awful by using AI to generate questions gives me a view into the dystopian future where the deluge of AI-generated books on Amazon pollutes and undercuts the whole concept of "knowledge found in books." Instead, we will have books with the verisimilitude of knowledge because the authors did not have the expertise or ambition to edit out the hallucinations.
Quora already started to decline years before LLMs became popular.<p>I think it's not AI that ruined Quora, but monetization. It's a strong case of Goodhart's Law: the measure is popularity and user retention.<p>In an ideal world, the correct target would be the quality of questions and answers.
The post shows limited understanding of what LLMs are. Ok, so Quora has a bad model that generates "Why doesn't Donald Trump shave his mustache?" type questions. That's Quora's problem.<p>But in general, a frontier LLM like the ones trained by OpenAI would not make such silly mistakes in its output. Its "fancy autocomplete" is trained on millions of documents that include detailed descriptions of Trump's face, and the presence of the word "Trump" in the context window would activate countless weights that relate to "orange hair", "pussy grabbing", "bribes", etc. Each word or sequence of tokens can be understood as a vector in a space of, say, 12288 different dimensions, and in each round of the algorithm the entire context is refined so that each of those dimensions accesses finer and finer details of the subject matter present in the training data.<p>So the model really does "know" Trump does not have a mustache. What it lacks (or has very limited ability to do) is rational exploration of the information it holds. It's like an idiot savant with the mental ability of a 4 year old that has somehow memorized the entire information that exists on the internet.
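To make the vector-space intuition above concrete, here is a toy sketch. The numbers and the tiny 4-dimensional space are entirely made up for illustration (real models use thousands of dimensions, e.g. 12288); the point is only that a name's vector sits closer to the attributes that co-occurred with it in training data than to ones that didn't.

```python
import math

# Toy, hand-made "embeddings" (NOT from any real model): each token is a
# vector in a small d-dimensional space. In a trained LLM these vectors
# are learned from co-occurrence patterns in the training corpus.
EMBEDDINGS = {
    "trump":    [0.9, 0.8, 0.1, 0.0],
    "orange":   [0.8, 0.9, 0.0, 0.1],
    "mustache": [0.1, 0.0, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: close to 1 when two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# In this made-up space, "trump" is far more similar to "orange" than to
# "mustache" - a cartoon of how a model's weights can encode which
# attributes actually go with a name.
print(cosine(EMBEDDINGS["trump"], EMBEDDINGS["orange"]))
print(cosine(EMBEDDINGS["trump"], EMBEDDINGS["mustache"]))
```

Again, this is a cartoon: real embeddings are learned, contextual, and refined layer by layer, but the geometric idea (association as proximity in a vector space) is the same.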
Quora founder Adam D'Angelo is still on the OpenAI board and is also behind the AI product Poe. I am sure Quora's Q&A data has been used to train a lot of AI models. Now they are just taking out the humans.
> Fully automated luxury mansplaining<p>It's surprising to see gendered, pejorative language thrown around in an article that has nothing to do with gender.
Before AI there were a lot of shills on the site who deliberately gave answers that had nothing to do with reality. AI just sped up the process.<p>And the answer the author of the article formulates has nothing to do with reality either.
Quora was already ruined and pretty much worthless for a very long time. When a query on Google surfaces Quora, I hardly ever click on it, as I know a) it most likely won't answer the question, and b) I'll walk into an incredibly confusing UX experience.
It had too many Indians. Not that that's absolutely a bad thing, but it was bad for me because the answers tended to reflect Indian cultural perspectives and they weren't very relevant to me.
On a separate note, why do people feel the need to exaggerate their Quora profile/education level/accomplishments? I'm not the only one who's noticed how many Quora profiles are filled with ludicrous, self-aggrandizing descriptions, right?
Going into the article, I was expecting that Quora used AI to <i>answer</i> questions. Instead it uses AI to ask questions. That's pathetic and fascinating at the same time.<p>It's pathetic because it shows there aren't enough people asking questions to keep the site alive, which means it's already dead and exists only in a kind of zombie fashion.<p>But it's fascinating because it raises the question: why are people taking time out of their busy day to answer pointless questions devised by a bot that doesn't know anything (and that isn't even interested in the answer)?<p>Quora is now a content farm where contributors work for free. That's horrible... and, in a very practical way, genius.