
Why Open Source AI Projects Need a Code of Ethics

3 points by sytse, almost 2 years ago

2 comments

scrum-treats, almost 2 years ago
... as in a Judeo-Christian code of ethics, or something different? Asking on behalf of everyone in the US who is nervously awaiting AirForce AI meets Christian Mingle.

As for the need for a code of ethics, I agree; however, ethics gets tricky fast. What is the minimum number of required characteristics/features to be tracked as "the ethics formula"? Can this be agreed upon? What bias does this minimal model of ethics introduce, given the parameters we've left out? How do validation and benchmarking work (e.g., who owns this)? Are we committed to independent group consensus? How long before lobbyists take hold?
andy99, almost 2 years ago
Open source isn't about "ethics" and in fact is antithetical to trying to enforce your world view on others, which is the opposite of software freedom.

More pragmatically, I'd suggest that anyone who thinks attaching ethical requirements to shared software is a good idea read this article by open source lawyer Kate Downing, who explains how pointless it is: https://katedowninglaw.com/2023/07/13/ai-licensing-cant-balance-open-with-responsible/ To quote (it's a long article; this is nowhere near her full argument):

    To put this another way, no license on earth is going to stop the Chinese Communist Party from using a model from Hugging Face to maintain biometric identification on its citizens, if that's what it wants to do. If a terrorist organization wants to use the models to plan future attacks against Americans, maybe there's no local court that cares to stop them from doing so. What exactly is the sort of model that's ok for North Korea to use however it wants, but whose use will require extraordinary levels of surveillance in the Western world? Are the licensors essentially arming hostile governments and terrorist organizations and only retaining control over law-abiding people? Do the licensors think it doesn't matter if existing totalitarian regimes entrench their power over their helpless citizens even further so long as the licenses allow them to keep such practices at bay in the Western world? Is their technology not actually dangerous enough to warrant all these use restrictions?

To quote Heather Meeker, another open source lawyer (et al.):

    ... we do not believe that a definition of Open Weights needs to import subjects such as privacy, human rights, or clearance of data inputs into its licensing principles. We know those are important topics, but they will take time to figure out. We are focused instead on the original idea of openness, and preserving the original goals of Freedom Zero of free software and open source.

https://github.com/Open-Weights/Definition

Muddying the definitions and goals of free software with subjective ethics is going to be extremely bad for the community and will accomplish none of the goals its proponents are after.