TechEcho
A technology news platform built with Next.js, offering global technology news and discussion.


© 2025 TechEcho. All rights reserved.

Avoiding AI is hard – but our freedom to opt out must be protected

238 points by gnabgib, 1 day ago

18 comments

Bjartr, 1 day ago

> Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it

The article says this like it's a new problem. Automated resume screening is a long-established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much. Although, it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.

It's not like companies take responsibility for such automated systems today. I think they're used partly for liability CYA anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too, of course, but it's a lot harder to show intent, which can affect the damages awarded, I think. Of course IANAL, so this could be entirely wrong. Interesting to think about, though.
beloch, 1 day ago

> "AI decision making also needs to be more transparent. Whether it's automated hiring, healthcare or financial services, AI should be understandable, accountable and open to scrutiny."

You can't simply look at an LLM's code and determine if, for example, it has racial biases. This is very similar to a human. You can't look inside someone's brain to see if they're racist. You can only respond to what they do.

If a human does something unethical or criminal, companies take steps to counter that behaviour, which may include removing the human from their position. If an AI is found to be doing something wrong, one company might choose to patch it or replace it with something else, but will other companies do the same? Will they even be alerted to the problem? One human can only do so much harm. The harm a faulty AI can do potentially scales to the size of its install base.

Perhaps, in this sense, AIs need to be treated like humans while accounting for scale. If an AI does something unethical/criminal, it should be "recalled", i.e. taken off the job *everywhere* until it can be demonstrated the behaviour has been corrected. It is not acceptable for a company, when alerted to a problem with an AI they're using, to say, "Well, it hasn't done anything wrong *here* yet."
lacker, 1 day ago

I think most people who want to "opt out of AI" don't actually understand where AI is used. Every Google search uses AI, even the ones that don't show an "AI panel" at the top. Every iOS spellcheck uses AI. Every time you send an email or make a non-cryptocurrency electronic payment, you're relying on an AI that verifies that your transaction is legitimate.

I imagine the author would respond, "That's not what I mean!" Well, they should figure out what they actually mean.
hedora, 1 day ago

Maybe people will *finally* realize that allowing companies to gather private information without permission is a bad idea, and should be banned. Such information is already used against everyone multiple times a day.

On the other hand, blocking training on published information doesn't make sense: if you don't want your stuff to be read, don't publish it!

This tradeoff has basically nothing to do with recent advances in AI, though.

Also, with the current performance trends in LLMs, we seem very close to being able to run models locally. That'll blow up a lot of the most abusive business models in this space.

On a related note, if AI decreases the number of mistakes my doctor makes, that seems like a win to me.

If the AI then sold my medical file (or used it in some other revenue-generating way), that'd be unethical and wrong.

Current health care systems already do that without permission, and it's legal. Fix that problem instead.
roxolotl, 1 day ago

Reminds me of the wonderful Onion piece about a Google Opt Out Village: https://m.youtube.com/watch?v=lMChO0qNbkY

I appreciate the frustration that, if not quite yet, it'll be near impossible to live a normal life without exposure to GenAI systems. Of course, as others say here, and as the date on the Onion piece shows, it's sadly not a new concern.
yoko888, 1 day ago

I've been thinking about what it really means to say no in an age where everything says yes for us.

AI doesn't arrive like a storm. It seeps in, feature by feature, until we no longer notice we've stopped choosing. And that's why the freedom to opt out matters: not because we always want to use it, but because knowing we can is part of what keeps us human.

I don't fear AI. But I do fear a world where silence is interpreted as consent, and presence means surrender by default.
tim333, 1 day ago

The trouble with his examples of doctors or employers using AI is that it's not really about him opting out; it's about forcing others, the doctors and employers, not to use AI, which will be tricky.
simonw, about 18 hours ago

There's a larger context here which is really interesting: as far as I can tell (and I'd love to hear confirmation from people more credible than me), the way LLMs and other models train on unlicensed data is NOT legal under current UK copyright law.

The UK government is trying to make it legal, presumably out of concern over staying competitive in this rapidly growing space.

Baroness Kidron, mentioned in this story, is the leading figure in the UK parliament who is pushing back against this.
djoldman, 1 day ago

> Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it. Or imagine visiting a doctor where treatment options are chosen by a machine you can't question.

I wonder when/if the opposite will be as much of an article hook:

"Imagine applying for a job, only to find out that a human rejected your resume before an algorithm powered by artificial intelligence (AI) even saw it. Or imagine visiting a doctor where treatment options are chosen by a human you can't question."

The implicit assumption is that it's preferred that humans do the work. In the first case, probably most would assume an AI is... ruthless? Biased? Both exist for humans too. Not that the current state of AI resume processing is necessarily "good".

In the second, I don't understand, as no competent licensed doctor *chooses* the treatment options (absent an emergency); they presumably *know* the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.
ilteris, about 16 hours ago

Do you think our freedom to opt out from electricity is protected?
daft_pink, 1 day ago

I'm not sure it's just code; it's just an algorithm similar to any other algorithm. I'm not sure that you can opt out of algorithms.
bamboozled, about 24 hours ago

It's absolutely never going to happen... there, I said it.
mianos, 1 day ago

Using a poem from 1897 to illustrate why AI will be out of control? The website name is very accurate. That's sure to start a conversation.
lokar, 1 day ago

They include no functional definition of what counts as AI.

Without that, the whole thing is just noise.
caseyy, 1 day ago

You can't outlaw being an asshole. You can't outlaw being belligerent. And you can't outlaw being a belligerent asshole with AI. There isn't a question of "should we". We have no means, as things stand.

Our intellectual property, privacy, and consumer protection laws were all tested by LLM tech, and they failed the test. Same as with social media: with proof it has caused genocides and suicides, and common sense saying it's responsible for an epidemic of anxiety and depression, we have failed to stop its unethical advance.

The only winning move is to not play the game and go offline. Hope you weren't looking to date, socialize, bank, get a ride, order food at restaurants, and do other things, because that has all moved online and is behind a cookie warning saying "We Care About Your Privacy" and listing 1899 ad partners the service will tell your behavioral measurements to for future behavior manipulation. Don't worry, it's "legitimate interest". Then it will send an email to your inbox that will do the same, and it will have a tracking pixel so a mailing list company can get a piece of that action.

We are debating what parts of the torment nexus should or shouldn't be allowed, while being tormented from every direction. It's actually getting very ridiculous how too little, too late it is. But I don't think humanity has the spine to say enough is enough. There are large parts of humanity that like and justify their own abuse, too. They would kiss the ground their abusers walk on.

It is the end stage of corporate neo-liberalism. Something that could have worked out very well in theory if we didn't become mindless fanatics[0] of it. Maybe with a little bit more hustle we can seed, scale and monetize ethics and morals. Then with a great IPO and an AI-first strategy, we could grow golden virtue retention in the short and long run…

[0] https://news.ycombinator.com/item?id=33668502
JimDabell, 1 day ago

Article 22 of the GDPR already addresses this: you have the right to human intervention.
Nasrudith, 1 day ago

This seems like one of those "my personal neuroses deserve to be treated like a societal problem" articles. I've seen the exact same sort of thing in complaints about the inability to opt out of being advertised to.
Imnimo, 1 day ago

This is going to end with me having to click another GDPR-style banner on every website, isn't it?