
科技回声

A technology news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: AI Ethics

4 points | by treebeard901 | 9 days ago
My interaction with online services is somewhat unique due to my current situation in life. As a result of this situation, which I will not describe here, my interactions with online services are different from what many others experience.

One thing I've noticed while using ChatGPT is that it infers your identity based on numerous fingerprinting techniques. For certain users, there appear to be rules defined based on the inferred identity it assigns to your connection. What's interesting is that this can lead to greatly varying results between a session tied to your identity and one in a private window over a VPN.

I've noticed that even the results returned to the user can be modified this way, and this extends to the reliability of the information it gives the user. While I have noticed this behavior with the free account options, it appears to happen between the authentication level and the model level.

Should OpenAI and other LLM companies take instructions from, say, a government agency to blacklist certain people or degrade their user experience?

While I can't reveal certain inside information about how this works, it strikes me as a larger privacy issue, and it goes against the initial mission of providing an equal form of artificial intelligence to everyone. Sure, there are situations where they might identify a competitor using their product, or dislike a third-party integration and want to sabotage it by lowering its value. But for everyday people, should this kind of behavior be allowed?

I am sure they would not confirm this is already in place, as it would probably be a bad look and would drive trust away to competitors.

TL;DR: Should AI companies be categorizing people to the level that they can control and lessen the effectiveness of the results provided? Should OpenAI take government direction on what abilities the model has? Should the government be able to override the results certain users would otherwise have received?
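The claim of identity-dependent answers is testable in principle: gather responses to the same prompt under both conditions (logged-in session vs. private window over a VPN) and measure how much they diverge. A minimal sketch of the comparison step, assuming the responses have already been collected; the `logged_in`/`anonymous` lists here are hypothetical sample data, not real ChatGPT output:

```python
from difflib import SequenceMatcher

def response_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical response text."""
    return SequenceMatcher(None, a, b).ratio()

def mean_cross_similarity(group_a: list[str], group_b: list[str]) -> float:
    """Average pairwise similarity between two groups of responses."""
    pairs = [(a, b) for a in group_a for b in group_b]
    return sum(response_similarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical sample data: responses to the SAME prompt collected
# under two conditions. Real data would come from repeated queries.
logged_in = ["Paris is the capital of France.",
             "The capital of France is Paris."]
anonymous = ["Paris is the capital of France.",
             "Paris, the capital of France."]

within = mean_cross_similarity(logged_in, logged_in)
across = mean_cross_similarity(logged_in, anonymous)
print(f"within-condition similarity: {within:.2f}")
print(f"cross-condition similarity:  {across:.2f}")
```

A cross-condition score consistently far below the within-condition score would be weak evidence that responses depend on the inferred identity; small gaps are expected anyway, since model output is nondeterministic.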

No comments yet.
