<i>"I bemoaned that humanity seems to be serving technology rather than the other way around. I argued that tech corporations have become too powerful and their power must be curtailed."</i><p>That's a generic problem with corporatism and monopoly, not "tech".<p>It shows up in "tech" because "tech" scales so well and has such strong network effects. But the US's tolerance of monopoly is the real cause. There need to be about four major players before markets push prices down. The US has three big banks, two big drugstore chains, etc.<p>Tough antitrust enforcement would help. Google should be broken up into Search, Browsers, Mobile Devices, Ads, and Services, and the units prohibited from contracting with each other.<p>Tough labor law enforcement would help. No more "gig worker" jobs that are exempt from labor law.
No more "wage shaving". No more unpaid overtime. Prorate medical insurance payments based on hours, so companies that won't pay people for more than 30 hours a week pay their fraction of medical insurance.
A minimum wage high enough that people making it don't need food stamps.
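The proration proposal above is simple arithmetic; here is a minimal sketch in Python. The 30-hour cap comes from the comment, but the 40-hour full-time baseline and the $6,000 annual premium are made-up illustrative figures, not anything from the original.

```python
# Sketch of the prorated-insurance idea from the comment above.
# Assumptions (not from the comment): a 40-hour full-time baseline and a
# hypothetical $6,000 annual employer premium contribution.
FULL_TIME_HOURS = 40
ANNUAL_PREMIUM = 6000  # hypothetical full-time employer contribution, in dollars

def prorated_contribution(weekly_hours: float) -> float:
    """Employer owes the premium in proportion to hours worked, capped at 100%."""
    fraction = min(weekly_hours / FULL_TIME_HOURS, 1.0)
    return ANNUAL_PREMIUM * fraction

# A company capping workers at 30 hours/week would still owe 75% of the premium:
print(prorated_contribution(30))  # 4500.0
```

The point of the cap at 1.0 is that overtime doesn't increase the employer's insurance share; the proposal only removes the incentive to keep workers just under a benefits threshold.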
> All of us must navigate the trade-off between “me” and “we.” A famous Talmudic quote states: “If I am not for myself, who will be for me? If I am only for myself, what am I?” We must balance optimizing for oneself with optimizing for others, including the public good... To take an extreme example, Big Tobacco surely does not support the public good, and most of us would agree that it is unethical to work for Big Tobacco. The question, thus, is whether Big Tech is supporting the public good, and if not, what should Big Tech workers do about it.<p>The duty to align your professional life ethically scales with your ability to do so. I personally don't cast aspersions on anyone working in tobacco farms or in a gas station selling cigarettes; they're just trying to get by. But if you're one or two levels up Maslow's pyramid, it's right to weigh your personal needs against the impact of your work. You'll also be better off for it, knowing that the world would be worse off if you decided to switch gears and become a carpenter/baker/bartender/choose your adventure.<p>I'll also say: there are ways to contribute morally outside of your 9-to-5. Volunteer to teach a neighborhood kid to code. Show your local sandwich shop how to set their hours online, or maybe even build them a cookie-cutter Squarespace site. Donate a small fraction of your salary (e.g., 0.5% local, 0.5% global) to causes you believe in, and scale up over the years.
Our intelligence agencies have long recognized that individuals burdened by debt are vulnerable to coercion and manipulation. It's time we acknowledge that the H-1B visa program creates a similar dynamic. The program’s restrictive rules hang over visa holders like the sword of Damocles, leaving them perpetually at risk and easily controlled.<p>We’ve already seen how Twitter, under Musk’s leadership, has exploited this system to erode user protections in favor of appeasing his ego. When such moral compromises are normalized at the top, their effects inevitably cascade downward, influencing broader organizational norms and behaviors.
I worked in Big Tech and it changed my life financially, so I can’t judge anyone else for doing it, but I will say that I had a moral reckoning while I was there and I am (right now) unwilling to go back.<p>At the time (2012-2022) the things about the business model that bothered me were surveillance culture, excessive advertising, and monopoly power. Internally I was also horrified at the abuse of “vendor/contractor” status to maintain a shadow workforce which did a lot of valuable work while receiving almost none of the financial benefits that the full-time workforce received.<p>Three years later, all of those concerns remain, but for me they’re a distant second behind the rise of AI. There’s a non-zero chance that AI is one of the most destructive creations in human history, and I just can’t allow myself to help advance it until I’m convinced that chance is much closer to zero. I’m in the minority, I know, so the best-case scenario for me is that I’m wrong and everyone getting rich on AI right now has gotten rich for bringing us something good, not our doom.
From the article:<p>> <i>But I have yet, until now, to point at the elephant in the room and ask whether it is ethical to work for Big Tech, taking all of the above into consideration.</i><p>People often highlight "boycotting" as the most effective action an individual can take to drive change, but for those who work in tech, the most powerful message you can send is <i>denying your labor</i>.<p>To me, this isn’t even about whether "Big Tech" companies are ethical; it’s a matter of ideological principle. FAANG companies already wield far too much power, and I refuse to contribute to that imbalance.
Not only do I disagree with the premise, but I think the article is poorly argued.<p>Was working on the Manhattan project unethical because it furnished the ability for us to kill humans on an even more vast industrial scale than we previously could have imagined? Perhaps, but it's hard to square this with the reality that the capability of mutually assured destruction has ushered in the longest period of relative peace and global stability in recorded history, during a period of time we might otherwise expect dramatically increased conflict and strife (because we are sharing our limited planet with an additional order of magnitude of humans). Had everyone at Los Alamos boycotted the effort, would we be in a better place when some other power inevitably invented the atomic bomb? Somehow I doubt it.<p>The world is a complex system. While there are hopefully an expanding set of core "values" that we collectively believe in, any single person is going to be challenged by conflicting values at times. This is like the Kagan stages of psychological development [1], but societally. I can believe that it's net bad for society that someone is working on a cigarette manufacturing line, without personally holding them accountable for the ills that are downstream of their work. There are competing systems (family, society) that place competing values (good - we can afford to live, bad - other people get sick and die) on the exact same work.<p>If people want to boycott some types of work, more power to them, but I don't think the line between "ethical" and "unethical" tasks is so clear that you can put whole corporations on one side or another of that line.<p>Sometimes I try and put a dollar amount on how much value I have received from Google in my lifetime. I've used their products for at least 20 years. Tens of thousands of dollars seems like an accurate estimate. 
I'm happy to recognize that two things are true: that there are societal problems with some big tech businesses that we would collectively benefit from solving AND that I (and millions of other people less fortunate than me, that couldn't "afford" the non-ad-supported cost of these services) have benefited tremendously from the existence of Google and its ilk.<p>[1]: <a href="https://imgur.com/a/LSkzutj" rel="nofollow">https://imgur.com/a/LSkzutj</a>
How do I reason ethically about this?<p>I am a security professional. My work directly affects the security of the systems I am responsible for. If I do my job well, people’s data is less likely to be stolen, leaked, intentionally corrupted, or held for ransom. I also influence privacy related decisions.<p>I work for a Mag7 company. The company has many divisions; the division I work for doesn’t seem to be doing anything that I would perceive as unethical, but other divisions of my company do behave in a way I consider unethical.<p>I’m not afraid to take an ethical stance; in a previous job at another company I have directly confronted my management chain about questionable behavior and threatened to quit (I ended up convincing them my position was correct).<p>So how do I reason about that? Really the sticking point is that large companies are not monoliths. Am I acting unethically for working for an ethical division of an imperfect company?
I dunno. I’m kind of with the sentiment in his original column, or at least how he paraphrased it. I think it’s naive to believe that you can bring about any real change in tech through moral suasion alone. The monetary payoffs are too large and there are too many people who will work for them no matter what. If you want to change some behavior that you find immoral, your best bet is to organize politically and pass laws.
'Never attribute to ethics that which can be explained by incentives' - Hanlon's Hammer<p>'Show me an organization's stupidity and I'll show you their malice' - Munger's Psychology of Human Misquotations
> But the belief in the magical power of the free market always to serve the public good has no theoretical basis. In fact, our current climate crisis is a demonstrated market failure.<p>It's not a free market failure. It's an example of the <i>Tragedy of the Commons</i>.<p><a href="https://en.wikipedia.org/wiki/Tragedy_of_the_commons" rel="nofollow">https://en.wikipedia.org/wiki/Tragedy_of_the_commons</a>
Technology scales lots of things. Including business models based on conflict of interest.<p>There were corporate conflicts of interest with respect to customers long before technology. But it took tech to create a community scrapbook, tied to a mass surveillance, psychology hacking and coercion flywheel, powered by hundreds of billions of dollars from third parties, and motivated by trillions of dollars of potential market cap.<p>Something has to give. Society is eventually going to have to come to terms with the fact that minor incidents of poor behavior, when scaled up with technology, can significantly degrade society.<p>Many things are illegal now, that in the far past would never have been considered a problem. Scale matters. And Internet tech scales.
Thank you! Great to see this message getting a bit of a platform.<p>As industry practitioners, we have the agency to force positive change in our field. If the government is too encumbered and the executives are too avaricious, that leaves us. If you want tech to do good things for people, work for a company that makes tech that does good things for people.
One ethical thing that some people on HN do, and more should: criticize big companies when they do something unethical, even if you'd want to work for them.<p>Yes, presumably, you will get on some company-wide hiring denylists. (Not because you're prominent, but because there will be routine LLM-powered "corporate fit" checks, against massive corpora and streams of ongoing surveillance capitalism monitoring of most things being said.)<p>Some things need to be said. And people need to not just hear it once, and forget it, but to hear it from many people, on an ongoing basis. So not saying it is being complicit.
> We must balance optimizing for oneself with optimizing for others, including the public good. So how does working for Big Tech thread this needle? This is the question that people who work for Big Tech must ask themselves.<p>This is a bullshit premise. Many people who worked at Google when I was there (including myself) sincerely believed that Google was good for society.<p>People sitting on the outside have an incorrect mental model of how people work at companies like this.<p>A very, very small minority work there and think the company is evil. The ones who think that do not last long, because it’s insufferable working with people drinking different koolaid. The same thing is true for working for Wall Street, defense contractors, drug companies, and whatever else you can think of.<p>If it’s a company that defines and leads the space, it’s likely filled with motivated employees who already think the company is doing the right thing.<p>So there is no ethical quandary of “what is good for me vs what is good for society,” because the employee thinks he/she is doing good for society by working there.<p>> “Uber skirted regulations, shrugged off safety issues, and presided over a workplace rife with sexual harassment.” Was it ethical to have worked at Uber under Kalanick?<p>This is a false dichotomy that doesn’t apply to people who drink the koolaid. If you think Uber has saved thousands of lives via reduced drunk driving and available rides out of bad areas, disrupting/ignoring local regulations is easy to justify. A leader who had sexual harassment issues is completely irrelevant because of “the mission.”<p>Implying that someone was unethical to be at Uber while that was going on makes as much sense as implying someone is unethical for being a research professor at Harvard while others there were publishing fraudulent papers.
Regarding his ask that ACM dedicate itself to the public good, the IEEE is already there in its code of ethics.<p>> hold paramount the safety, health, and welfare of the public, to strive to comply with ethical design and sustainable development practices, to protect the privacy of others, and to disclose promptly factors that might endanger the public or the environment<p>That code is pretty squarely at odds with big tech's latest malevolent aims.
> the belief in the magical power of the free market always to serve the public good has no theoretical basis<p>This needs to be repeated more often.<p>Early on, there was this idea that free market capitalism was inherently amoral, and we had to do things like "vote with your wallet" to enforce some kind of morality on the system. This has been gradually replaced with a pseudo-religious idea that there's some inherent "virtue" to capitalism. You just need to have faith in the system, and everything will magically work itself out.
It's good to see someone have the humility to lay out their strong previous position and then renounce it in plain language. So much public communication these days consists of hedging, back-pedaling, and other forms of strategic disavowal.
I really like Timothy Snyder's take on this.<p>Breakthroughs in information technology always cause disruption in the political sense (wars and chaos). It was like that when writing was invented (making big organized religions possible), it was the same with the printing press (enabling the Reformation and big political movements), and it was similar with radio (which allowed 20th-century-style totalitarian regimes).<p>Each time, the legacy powers struggled to survive and wars started. It took some time for societies to adapt, regulate the new technologies, and create a new stable equilibrium.<p>It's not surprising that it's the same with the internet. We have an unstable, wild-west-style information oligarchy forming before our eyes. The moguls build continent-spanning empires. There's no regulation, the costs are negligible, and the only ones trying to control it are the authoritarians. And the new oligarchs are obviously fighting with their thought-control powers against regulation with all they've got.<p>It won't end without fireworks.
I didn't think the Cambridge Analytica scandal had anything at all to do with computer science. I thought it had to do with business and hence business ethics.
I work on software for managing casinos. I feel morally superior. Big Tech has real problems if working with gambling and weaponry is preferable to Big Tech.
“In a sort of ghastly simplicity we remove the organ and demand the function. We make men without chests and expect of them virtue and enterprise. We laugh at honour and are shocked to find traitors in our midst. We castrate and bid the geldings be fruitful.”
> It is difficult to get a man to understand something, when his salary depends on his not understanding it.<p>I'm going to have to add that to my list of favorite aphorisms. And it's not just salaries that drive this dynamic. It is difficult to get someone to understand something when their entire identity is invested in not understanding it. This applies to religions, political ideologies, and even to a lot of self-styled rationalism.
"But in my January 2019 Communications column, I dismissed the ethical-crisis vibe. I wrote, "If society finds the surveillance business model offensive, then the remedy is public policy, in the form of laws and regulations, rather than an ethics outrage." I now think, however, I was wrong."
The "ethics crisis", as described here, is the complaining of one ruling elite (traditional media, universities, bureaucrats, etc.) against another upcoming elite (tech). The problem is that all of the power is accruing to tech — at the expense of the competing, traditional elites.<p>An even bigger problem is that most of the economic and social benefits have come from technology. This even includes shorter work weeks and paid leave (typically falsely credited to unions) and greater disposable income, which have come from technology (broadly speaking) and not from activism.<p>A tech "ethics crisis" and the "dangerous" profit motive are just renewed attacks against capitalism, and "tech" is itself just the tip of the spear of capitalism (and the cultural nom de guerre of capitalism's elites).
Big Tech subverted the world’s longest-running democracy and tipped a majority of the global population into authoritarian rule. An essay wringing its hands over the question doesn’t seem very useful at this point.
> Is Big Tech supporting the public good, and if not, what should Big Tech workers do about it?<p>The problem is not whether Big Tech supports or doesn't support something. The problem is that they have an opinion at all! The pitch is that they are "platforms" and "arbiters" who decide like the highest court. They should not have any opinions at all!<p>This whole oligopoly needs to be dissolved!
> In fact, our current climate crisis is a demonstrated market failure.<p>This is wrong on so many levels. There is neither a climate crisis nor a market failure. If anything, centrally planned economies exhibited (and still exhibit) higher levels of pollution and destruction of public goods.<p>Mindless repetition of the climate-crisis trope has done more damage to the cause than carbon emissions have.
I agree with the author that questions of ethics are social optimization problems.<p>> We must balance optimizing for oneself with optimizing for others<p>Yet if each person optimized for themselves, the balancing would be taken care of automatically. The invisible hand is even more free and dexterous on the social scale than on the economic.<p>> the belief in the magical power of the free market always to serve the public good has no theoretical basis. In fact, our current climate crisis is a demonstrated market failure.<p>The power of the free market is at least as theoretically and empirically sound as the climate crisis.
The person is arguing about whether it is good or bad to work for Big Tech. I wouldn't hate the players when you should really hate the game. Most of the populace is largely unaware of surveillance, or of why the products they use have a negative influence. Ceasing to work for Big Tech does not change this lack of education. The ACLU and EFF have tried advocating for privacy and supporting laws that would regulate large technology companies, but that isn't really practical when, for around $1 million in lobbying, you can get nearly anything you want passed. Also, Big Tech may need fewer people to achieve its goals, so I think this post is too little, too late.
And he doesn't even get around to mentioning that Google (and Amazon) are providing AI computing to Israel even though Google's own lawyers warned that it could be used to violate human rights. Their lawyers wrote: “Google Cloud services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank.”<p>It gets worse: they got advice and then didn't follow it:<p>"Google reportedly sought input from consultants including the firm Business for Social Responsibility (BSR). Consultants apparently recommended that the contract bar the sale and use of its AI tools to the Israeli military 'and other sensitive customers,' the report says. Ultimately, the [Google] contract reportedly didn’t reflect those recommendations."<p><a href="https://www.theverge.com/2024/12/3/24311951/google-project-nimbus-internal-documents" rel="nofollow">https://www.theverge.com/2024/12/3/24311951/google-project-n...</a><p>The end result is Lavender, which HRW details here: <a href="https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza" rel="nofollow">https://www.hrw.org/news/2024/09/10/questions-and-answers-is...</a>