
Perspective API – An API that makes it easier to host better conversations

114 points by flinner over 8 years ago

22 comments

StavrosK over 8 years ago
I took the slider to the left end, and it was a lot of climate change denial. I thought "ugh, is this going to opine on left vs right ideology? That seems Orwellian", and dragged the slider to the right end, where, to my surprise, all the comments were insulting/useless.

It's pleasing to know that it doesn't care about your opinion, just about how eloquently it's expressed. Sounds very useful.
archgrove over 8 years ago
It seems to really hate profanity. `I love it` is 1% toxic. `I bloody love it.` 38%; `I f**king love it` 95%. In many circles, the more profane are more congratulatory.
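Comparisons like these are easy to reproduce in bulk. Below is a minimal sketch, assuming the publicly documented `comments:analyze` endpoint of the Perspective API and a valid API key; the endpoint version, attribute names, and response shape may have changed since this thread was posted.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    """Return the summary TOXICITY score (0.0-1.0) for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for phrase in ["I love it", "I bloody love it.", "I f**king love it"]:
    print(f"{toxicity(phrase):4.0%}  {phrase}")
```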
iandanforth over 8 years ago
"The author of the previous comment has a simian countenance which displays a lineage rich in species diversity" - 2% toxic. So insults are OK as long as they are from Watterson. I approve!
startupdiscuss over 8 years ago
Great tool, by the way. I think it might actually work!

I started off with a highly toxic comment (in the window on the tool, not in "real life") and I tried to be just as insulting while lowering the toxicity level.

It was informative that when I did this, the sentence sounded more educated, more polished, but was just as rude.

If this spreads wide, I suspect it will usher in a new era of veiled insults and implied disfavor, but that will be a vast improvement over what we have today.
elcapitan over 8 years ago
To me that creates the question why I would read the comments in the first place. Because "filter to the left" contains not one post that would interest me. It also doesn't give me a feeling on relevant discrepancies in opinion.

If I could filter I would love to filter out everybody in the "filter completely to the left" as well as the "completely to the right" spectrum and just have the ones in the middle. The ones on the left are insanely boring and conformist, and the ones on the complete right are really just idiots.
jkaptur over 8 years ago
Very interesting, though I'd love more details about the signals they're using.

I hold out a slim hope that this discussion doesn't once again devolve into "but how can we even define 'truth'??" This seems more analogous to spam detectors - perfection is absolutely theoretically impossible, but low-error implementations are incredibly useful in practice.
mr_luc over 8 years ago
I remember (from building a sentiment analysis irc bot[1] back in the day that used the afinn wordlist) that sentiment analysis is effective because robots can do mid-70%-accurate classification, but humans only agree around 80% of the time on 'positive/negative' classification.

So I always wonder, if simple models like the afinn wordlist work at close-to-human levels, how much total value is added by the more robust model. Still very cool!

---

[1] https://github.com/mrluc/cku-irc-bots/blob/master/feelio.coffee - included the following line of code, which is a coffeescript crime but also a cute sentence:

```coffeescript
if i_feel_you then @maybe @occasionally, => say i_feel_you
```
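A wordlist scorer of the sort described above fits in a few lines. The sketch below is a toy with made-up weights, not the real AFINN-111 values; it only shows the shape of the approach.

```python
# Toy AFINN-style sentiment scorer: sum per-word weights from a lookup table.
# The weights below are illustrative, not the actual AFINN-111 values.
WORDLIST = {
    "love": 3, "great": 3, "nice": 2, "cool": 1,
    "boring": -2, "useless": -2, "hate": -3, "idiot": -3,
}

def sentiment(text):
    words = text.lower().replace(".", " ").replace("!", " ").split()
    return sum(WORDLIST.get(w, 0) for w in words)

print(sentiment("I love it"))        # prints  3
print(sentiment("This is useless"))  # prints -2
```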
elcapitan over 8 years ago
"Disobedience, in the eyes of any one who has read history, is man's original virtue. It is through disobedience that progress has been made, through disobedience and through rebellion."

Oscar Wilde, being 14% "toxic".

--

"Politeness, n. The most acceptable hypocrisy."

Ambrose Bierce, being 48% "toxic"
geocar over 8 years ago
I tried two phrases:

```
59% You're a potato.
```

and:

```
60% You're a potato
```

Just losing the period makes it more toxic! Wow.

Now what can I do with this?
ImTalking over 8 years ago
You can have all the technology in the world but eventually what censorship really means is someone else stopping you from speaking your mind, and that person can be benign or malignant.

"host better conversations" is a nice little marketing jingle but what it really means is "host conversations closer to what you consider 'better'". And 'better' is in the eye of the beholder.
underyx over 8 years ago
I'm getting <10% toxicity ratings with sarcastic, mocking comments. Those ones will be a bit hard to fight off, it seems.
RubyPinch over 8 years ago
> We are also open sourcing experiments, models, and research data to explore the strengths and weaknesses (e.g. potential unintended biases) of using machine learning as a tool for online discussion.

Does that mean it will be possible to run practically everything on your own server, without any Google API, once those things are released?
qznc over 8 years ago
Context: NYT will use it: http://www.nytco.com/the-times-is-partnering-with-jigsaw-to-expand-comment-capabilities/
EJTH over 8 years ago
I just tried some different terms in the test bed on perspectiveapi.com and so far I noticed that 'Podesta' is a very toxic term, the same as 'Pizzagate' and 'Skippy' for some reason.

Kind of strange. Seems like this is more a tool for shaping the narrative than it is a tool for keeping a discussion sober.
sciurus over 8 years ago
This reminds me of the work Crystal is doing to coach you to write in the style most appropriate for your audience.

https://www.crystalknows.com/
spdustin over 8 years ago
Seems like lemmatizing the text before it's rated by humans (during the training of the model, that is) would get around a healthy portion of the grammatically-induced scoring differences.
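As a rough sketch of that preprocessing step, one could collapse inflected forms before text is scored or used as training data. This uses NLTK's WordNet lemmatizer; spaCy or any other lemmatizer would serve the same purpose.

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # one-time download of the WordNet data
lemmatizer = WordNetLemmatizer()

def normalize(text):
    """Lowercase and lemmatize so near-identical phrasings look the same."""
    tokens = text.lower().split()
    return " ".join(lemmatizer.lemmatize(tok) for tok in tokens)

print(normalize("You are potatoes"))  # "you are potato"
```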
calpaterson over 8 years ago
Detecting (and warning commenters) about toxicity seems like a really useful idea. I would certainly like to browse many Brexit discussions with the top 40% of toxic comments cut out.
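Once each comment carries a toxicity score (for example from an API call like the one sketched earlier in the thread), that kind of filter is only a few lines. A minimal sketch with invented example data:

```python
# (text, toxicity_score) pairs; the scores here are invented for illustration.
comments = [
    ("Interesting point about trade policy.", 0.04),
    ("Only an idiot would believe that.", 0.91),
    ("Here's a source that contradicts this claim.", 0.07),
    ("This take is beyond stupid.", 0.78),
    ("Has anyone actually read the bill?", 0.02),
]

# Cut the most toxic 40%, keep the remaining 60% in their original order.
keep_count = int(len(comments) * 0.6)
least_toxic = set(sorted(comments, key=lambda c: c[1])[:keep_count])
visible = [text for text, score in comments if (text, score) in least_toxic]

for text in visible:
    print(text)
```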
elizabethanera over 8 years ago
In addition to being a basic term match, it also would mark any conversation about controversial subjects as toxic. Try saying something innocuous about suicide or rape.
kakarot over 8 years ago
This thing does not like exclamation points. I'm all for being less toxic and more positive in my language, but sometimes I get really excited about it!!!
kornakiewicz over 8 years ago
Sentiment analysis, basically?
dral over 8 years ago
I wonder what the results are on HN comments.
bawana over 8 years ago
'You have a nice behind' scores a lot differently than 'you have a nice ass'. Hmmmm.