
An Alphabet X concept from 2016 is an unsettling vision of social engineering

159 points by m1 about 7 years ago

27 comments

pi-squared about 7 years ago
On the one hand, we want freedom of speech inside organizations so that they can discuss ideas like this. On the other hand, it's dangerous if an idea like this leaks, because it presumably shows how evil Google is.

Now imagine a leaker inside your own brain. Every thought you may have could be displayed on the front page of The Verge or whatever. "Peter's internal thoughts hint he may be a pedophile".

I prefer Google having these internal discussions of highly disturbing concepts vs not having them.

"I do not agree with what you have to say, but I'll defend to the death your right to say it."
alleyshack about 7 years ago
As a former Googler, this video doesn't surprise me at all. Google tends to have a bit of a ...culture bubble, I suppose, where the engineers and designers forget that not everyone is okay with having all their data harvested and used in whatever manner Google thinks is best. I suspect this is related to the fact that the vast majority of these engineers and designers are straight, cis, college-educated men who are white or Asian and between the ages of 18 and 35, a group which on average tends to have fewer reasons to be concerned about their data being used against them. Then add in the fact that most employees have bought into the general "all-knowing, all-caring Google" mindset, and you have a perfect recipe for the kinds of thoughts presented in the video.

The scary thing is, most of the people I worked with genuinely believed that ideas like this are good for the user. The fact that such things require extensive privacy invasions doesn't even cross their minds, because they don't think of it as a privacy invasion. It's just another way to "optimize" toward some goal or another.
brudgers about 7 years ago
One of my great-grandmothers worked in a button factory as a child. My grandfather on that side was able to go to college because of The Cooper Union. It was explicitly non-Lamarckian. As is all social mobility. Google's vision literally pairs the idea that the options offered to children will be bounded by the data trail of their parents with the image of a man working on a row boat. A literal interpretation -- the only kind computers make -- suggests that the artifacts this system will produce for the man's children are more likely to be oars than iPads. That's not to say this fetishization of Lamarckian genetics doesn't make sense inside Google. The median salary for the people there is $200,000, and the system will probably cough up Hondas if not BMWs. Google's vision reduces to using computation to create and maintain a caste system. The image of the man in the boat is a deliberate design decision.
GuiA about 7 years ago
Here are the concrete ideas presented in 6 minutes of narrated video:

- a system could highlight choices aligned with the user's values (examples: recommend taking Uber Pool instead of UberX in the Uber app, suggest locally grown bananas in a shopping app)

- a system could manufacture custom objects for a user to gather missing data about that user

And the broader, higher-level concept:

- data aggregated about a user can be considered a "genome", and perhaps concepts applicable to genomes (sequencing, ...) are similarly applicable

This whole subfield of "speculative design" feels particularly useless (this video is part of a broader novel practice; see for instance https://www.primerconference.com/2017/). A few very vague points are raised, with no direct way to probe the questions or start answering them. This is in contrast to, for example, the scientific approach, where the base hypothesis usually gives us a clue as to what we might want to measure, change, etc.

So sure, at the end of your multi-week process you get a slick video, except you're not much further down the line of inquiry (and if it gets leaked you have the whole internet turn on you).

If one were assigned to think about this topic, it seems like actually exploring the base hypothesis ("personal data can be thought of as a genome") with real experiments designed to test the limits of that statement would be a much more productive use of time.
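The first idea on that list is, mechanically, just re-ranking: score each available option against goals the user has opted into and surface the best-aligned one first. A minimal sketch in Python, with option attributes and goal weights invented for illustration (none of this comes from the video itself):

    from dataclasses import dataclass, field

    @dataclass
    class Option:
        name: str
        attributes: dict = field(default_factory=dict)  # e.g. {"shared_ride": 1.0}

    def nudge_rank(options, goals):
        # Score each option by how strongly its attributes match the
        # user's opted-in goals, then put the best-aligned choice first.
        def score(opt):
            return sum(w * opt.attributes.get(g, 0.0) for g, w in goals.items())
        return sorted(options, key=score, reverse=True)

    # A user who opted into a hypothetical "reduce my footprint" goal:
    goals = {"shared_ride": 1.0, "locally_grown": 1.0}
    rides = [Option("UberX"), Option("Uber Pool", {"shared_ride": 1.0})]
    print([o.name for o in nudge_rank(rides, goals)])  # ['Uber Pool', 'UberX']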
walterbell about 7 years ago
One theoretical foundation for some of the "nudging" ideas: http://www.nybooks.com/articles/2014/10/09/cass-sunstein-its-all-your-own-good/

> there is very little awareness in these books about the problem of trust. Every day we are bombarded with offers whose choice architecture is manipulated, not necessarily in our favor. The latest deal from the phone company is designed to bamboozle us, and we may well want such blandishments regulated. But it is not clear whether the regulators themselves are trustworthy. Governments don't just make mistakes; they sometimes set out deliberately to mislead us. The mendacity of elected officials is legendary and claims on our trust and credulity have often been squandered. It is against this background that we have to consider how nudging might be abused.

> ... Sunstein's idea is that we who know better should manipulate the choice architecture so that those who are less likely to perceive what is good for them can be induced to choose the options that we have decided are in their best interest. Thaler and Sunstein talk sometimes of "asymmetric paternalism."

> ... Deeper even than this is a prickly concern about dignity. What becomes of the self-respect we invest in our own willed actions, flawed and misguided though they often are, when so many of our choices are manipulated to promote what someone else sees (perhaps rightly) as our best interest? Sunstein is well aware that many will see the rigging of choice through nudges as an affront to human dignity: I mean dignity in the sense of self-respect, an individual's awareness of her own worth as a chooser. The term "dignity" did not appear in the book he wrote with Thaler, but in *Why Nudge?* Sunstein concedes that this objection is "intensely felt." Practically everything he says about it, however, is an attempt to brush dignity aside.
_bxg1 about 7 years ago
One of the greatest and most common fallacies I see Silicon Valley companies make on a regular basis is assuming that human beings are purely rational entities. To be more specific, in many cases they ignore mental health.

When applied to dispassionate entities this type of reasoning might make sense, but real people:

- Experience anxiety. Chilling effects, stress, etc. become abundant when we have no internal life; no room for our minds to move and experiment and be messy without being locked into step with the outside world.

- Are changed by observations we or others make about ourselves. Our minds are not static systems; they are infinitely recursive and dynamic. Observation of our own minds - even perfectly accurate observation - influences them. The cycle keeps going, ad infinitum.

The world is simpler when it's homogenized. It's easier to reason about. I sympathize; I'm a programmer myself. But these aren't just approximation errors. These are ways in which recent technology actively damages the human psyche, both on an individual and a societal level. The Olympians of Silicon Valley are trying to shape the world in their own image, and they'll burn it to the ground before they admit their fault.
dmayle about 7 years ago
I think you could go back to the early seventies and make this same kind of video about gene sequencing, DNA, CRISPR, etc.

Just like we didn't know to what extent DNA changes can alter an individual, and what the repercussions of making those changes are, the same is true of our actions and experiences.

This video is a look at a nascent field that requires thinking and ethical exploration.

What are the ramifications of an individual who, through modern science, has the ability to alter not just their genetic makeup, but also their experiences and behaviors, so as to achieve a desired outcome?

Where is free will in all of this? Does it exist? Will we all become beholden to our former selves who, making decisions with less information, made decisions we no longer agree with?

What about our parents? They make choices, sending us to schools or acting classes, and they shape us in the same way.

...and where is the limit? As long as we're not very good at it (like right now), it's OK, but once we've accomplished a certain level of proficiency, is that when it becomes dangerous?

This is a thought-provoking video, but I think a lot of the issues expressed here are a reflection of the viewers, and not of the video itself. It even raises the concept of responsible behavior (it refers to 'stewardship').

Our current society values the concept of 'humanity' and the 'greater good', which gives me hope that we will take action and correct evolutionarily unstable systems and behaviors.

To end on a joke: I, for one, welcome our new selfish ledger overlords.
cmiles74 about 7 years ago
It's like they've forgotten that they mostly sell ads. In the end, they won't be training the human race to be more generous. They'll be training them to buy more crap.
skummetmaelk about 7 years ago
The video is describing mind control. Literally... I have no words.

First they offer to let users select how they want to change their behaviour. I guess that's fine; we all want to improve ourselves. This segment lasts a couple of seconds, and the rest of the time is dedicated to describing how the system can be made to predict and target "bad" behaviours of humans as a species.

Who greenlit this.

EDIT: I understand this is not a product, but clearly it means Google is seriously thinking about doing such things. That is just as scary to me.
sqdbps about 7 years ago
When did tech reporting become the thought police and a guardian of the status quo?

They seem to react to anything more exciting than a new phone release with fear and scorn.

Exploring concepts and ideas is how progress happens, and it's telling that they see "Duplex" - the most impressive tech showcased at these keynotes - as a misstep.
jessriedel about 7 years ago
Can anyone speak to whether this style of video is used widely internally at Alphabet X, or if this is just an anomaly? I'm not talking about the content, just the practice of sending around videos about ostensibly intellectual ideas that have very little content and instead rely on music/imagery/etc.? It's very embarrassing.
joejerryronnie about 7 years ago
This is the most terrifying dystopian thing I have ever seen (mostly because we're not that far away from this becoming a reality). To think that just a couple decades from now, Google could literally be directing a majority of our waking actions based off a soulless machine learning algorithm. In this reality, do you have the ability to opt out, or is the only solution violent revolution and the destruction of the machines?
PuffinBlue about 7 years ago
Even though it's speculative, collective multi-generational behavioural sequencing (and influencing) seems a lot like psychohistory.

Sci-fi becoming possible dystopian reality yet again.
walterbell about 7 years ago
Could we see the positive speculation video? There must have been one, right? Standard scenario planning technique.
AnsisMalins about 7 years ago
Good to know I'm not the only one to think of this: a phone app that asks what you want and then orders you around. And when you tick the "online" box, it optimizes over the population of all online users. For example, it might suggest places and times so as to increase the chances of serendipitous encounters.
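That population-level optimization can be read as a simple matching problem: pick the option the user finds acceptable that the most other opted-in users also find acceptable. A toy sketch, with the slot data entirely made up:

    def serendipity_slot(my_slots, others_slots):
        # Of the (place, time) pairs I marked acceptable, pick the one
        # that appears in the most other users' acceptable lists.
        return max(my_slots, key=lambda s: sum(s in o for o in others_slots))

    me = [("cafe", "sat-10"), ("park", "sun-14")]
    others = [[("cafe", "sat-10")], [("cafe", "sat-10"), ("park", "sun-14")]]
    print(serendipity_slot(me, others))  # -> ('cafe', 'sat-10')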
sizzle about 7 years ago
This is eerily similar to the concept of a 'cookie' seen in the Black Mirror episode White Christmas.

(*Spoiler alert*)

A cookie is a device "that is inserted under the client's head, by the brain, and kept there for a week, giving it time to accurately replicate the individual's consciousness. It is then removed and installed in a larger, egg-shaped device which can be connected to a computer or tablet."

Highly recommend this episode for those interested: https://en.m.wikipedia.org/wiki/White_Christmas_(Black_Mirror)
taneq about 7 years ago
Another perfect illustration of the way that Google recasts "your personal information is required to generate that outcome" as "you must surrender your personal information to us in order to benefit from that outcome."

Most of the functionality they describe is perfectly achievable without any personal data ever leaving your phone / home computer / personal server, but the quid-pro-quo of "service in exchange for data" is too deeply ingrained.
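To make that concrete: a toy sketch of goal tracking that never touches the network, assuming a made-up local file and schema. Everything the nudge needs lives on the user's own disk:

    import json
    from pathlib import Path

    LEDGER = Path.home() / ".local_ledger.json"  # hypothetical on-device store

    def load_ledger():
        # Start fresh if the file doesn't exist yet.
        return json.loads(LEDGER.read_text()) if LEDGER.exists() else {"history": []}

    def record_choice(ledger, choice):
        # Behavioural history accumulates locally; there is no upload
        # step anywhere in this flow.
        ledger["history"].append(choice)
        LEDGER.write_text(json.dumps(ledger, indent=2))

    ledger = load_ledger()
    record_choice(ledger, {"app": "rides", "picked": "Uber Pool"})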
falcolas about 7 years ago
This underscores to me why I have no fear of the Singularity. It would be impossible for an AI to do worse things to us than we as humanity are willing to do to ourselves.

We have tortured and maimed in the name of advancing medicine. We have killed millions in the name of saving more millions. We manipulate people's thoughts and behaviors in order to make them "better". We evaluate other people based on those we perceive as their peers.

AI only makes those processes easier.
xab9 about 7 years ago
Did anyone think of Asimov's Foundation? I doubt that we will ever have "big enough data" for such a scale, but the socio-genetical ledger is a concept that may lead toward ubiquitous informational systems.
Shank about 7 years ago
With a lot of leaked internal videos, you get a lot of spin from whatever agency is reporting on them. With GDPR and Facebook's data problems, a video like this is surely going to be reported on as a dystopian future and as indicative of everything that's wrong with Google.

But taken at face value -- just judging the video itself -- it doesn't seem that bad. To the untrained eye it can appear horrible, but this is largely because people conflate all data into one group. There are two different types of data: the data that you create intentionally, and the data that's the passive result of you being around technology. There's data that, if shared, could be potentially deadly in some parts of the world (messages, photos, videos, calls, etc.) -- and there's data that exists, but is not captured about people. This video is clearly showing more of the latter.

The canonical example the video uses is a *scale*. The idea is that the ledger thinks it can make better decisions if it knows your weight. It's not sure that you'd buy an existing smart scale, so it wants to create one that could fit the bill for capturing that data from you. This is passive data -- it already exists about you, but it isn't collected. Of all the types of data collection, this is the most genuine! If you go to a doctor's office, the first thing they do is weigh you and take your temperature, for good reason. It's one of the biggest factors in health and treatment for a patient. It can give off warning signs or indicate more effective treatments.

This video isn't about taking the random personal data you store on Google and using it nefariously. It's about taking data that you already have and trying to make actionable decisions based on it. The video characterizes the data as if it's an independent living thing as an exercise -- not as the grand end outcome. The idea is simply that if you can track what users do and how they behave in certain situations, you can use those past decisions to help inform future generations. The video doesn't even make this a mandatory thing -- it shows a user turning it on and using it to pursue specific goals (like eating healthier).

If Google wants to tackle a problem like depression through opt-in deep learning on habits, nudging people in the right way, then I don't see a problem with it. If you could categorically learn how to avoid pitfalls and make things better for future generations, why wouldn't you? It actually kinda gives every single life a little more meaning and purpose -- actually acting as inputs on how to better the human race.

Everyone wants to interpret things like this in the worst possible way. "Google is evil or wants to sell ads, so they're going to build a system for being evil and selling ads." But look at the facts: there's a lot of fear over not a lot of time-tested results. Google search is really, really good. Waymo cars really don't crash that much. They do a lot of projects for the "greater good," and haven't been historically known to take advantage of the data they collect.

They're pitching this internal video to their employees as inspiration to build a better quality of life for future generations. They aren't pitching it to, uh, data mine everyone for their own profit.

If ethics is about what you do when nobody is looking, then this is a good example of consistent ethics. When Google says publicly they care, and then privately says they care, I think it's safe to say they genuinely do care.
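Read mechanically, the scale example is an active-feature-acquisition loop: rank the data the model lacks by how much it would help, then go collect the top-ranked gap (here, by manufacturing the object that measures it). A minimal sketch, with feature names and importances invented for illustration:

    def next_feature_to_acquire(importances, record):
        # Of the features the model cares about, return the most
        # important one still missing from this user's record.
        missing = [f for f in importances if record.get(f) is None]
        return max(missing, key=importances.get, default=None)

    importances = {"weight_kg": 0.9, "steps_per_day": 0.6, "sleep_hours": 0.4}
    user = {"weight_kg": None, "steps_per_day": 7200, "sleep_hours": 6.5}
    print(next_feature_to_acquire(importances, user))  # -> 'weight_kg'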
otakucode about 7 years ago
This fits perfectly with the views expressed by Eric 'If you don't have anything to hide you shouldn't worry' Schmidt in his book 'The New Digital Age'. Google is rich. Therefore, Google is Better. Because they are Better, they need to rein in society and guide it for its own protection and benefit. It's really a very old-school mindset, the kind that ruled the world for a very long time in the ages of kings, god-kings, theocracies, etc. It's the idea that some people are fundamentally born to lead and others born to follow. This is precisely and exactly the viewpoint that "all men are created equal" was penned to spit in the face of.
dwighttk about 7 years ago
Tangent: videos like this are so irritating... Just give me the transcript, possibly with pictures/video interleaved if necessary (but probably fewer than you think).
aoner about 7 years ago
I feel like Yuval Noah Harari is spot on with dataism emerging as the new dominant inter-subjective truth. Go and read Homo Deus; you will not be disappointed. The idea of dataism will slowly replace our current humanistic ideas of liberty/individualism. I'm not saying I agree with the video/general direction, but I think we're foolish to disregard this idea as an isolated event produced in a "culture bubble".
jusujusu about 7 years ago
Great music!
hprotagonist about 7 years ago
*Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be "cured" against one's will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.*

C. S. Lewis, 1948
s_kilk about 7 years ago
> The title is an homage to Richard Dawkins' 1976 book The Selfish Gene.

Another instance of techies being led astray by flimsy, reactionary pseudo-science.
dmead about 7 years ago
This is getting gross. I really fail to understand the type of person that believes this is a good idea.