With a lot of leaked internal videos, you get a lot of spin from whatever outlet is reporting on them. With GDPR and Facebook's data problems, a video like this is surely going to be reported as a dystopian vision and indicative of everything that's wrong with Google.<p>But taken at face value -- judging the video on its own -- it doesn't seem that bad. To the untrained eye it can look horrible, but that's largely because people conflate all data into one group. There are two different types of data: the data you create intentionally, and the data that's the passive byproduct of being around technology. There's data that, if shared, could be potentially deadly in some parts of the world (messages, photos, videos, calls, etc.) -- and there's data that exists about people but is never captured. This video is clearly about the latter.<p>The canonical example the video uses is a <i>scale</i>. The idea is that the ledger thinks it can make better decisions if it knows your weight. It's not sure you'd buy an existing smart scale, so it wants to design one that would fit the bill for capturing that data. This is passive data -- it already exists about you; it just isn't collected. Of all the kinds of data collection, this is one of the most legitimate. If you go to a doctor's office, the first thing they do is weigh you and take your temperature, for good reason: weight is one of the biggest factors in a patient's health and treatment. It can reveal warning signs or point to more effective treatments.<p>This video isn't about taking the personal data you store on Google and using it nefariously. It's about taking data you already generate and trying to make actionable decisions from it. The video characterizes the data as an independent living thing as a thought exercise -- not as the grand end goal. The idea is simply that if you can track what users do and how they behave in certain situations, you can use those past decisions to help inform future generations. The video doesn't even make this mandatory -- it shows a user opting in and using it to set specific goals (like eating healthier).<p>If Google wants to tackle a problem like depression through opt-in deep learning on habits, nudging people in the right direction, then I don't see a problem with it. If you could categorically learn how to avoid pitfalls and make things better for future generations, why wouldn't you? It kinda gives every single life a little more meaning and purpose -- each of us acting as an input on how to better the human race.<p>Everyone wants to read things like this in the worst possible light. "Google is evil or wants to sell ads, so they're going to build a system for being evil and selling ads." But look at the facts: there's a lot of fear and not a lot of time-tested results to justify it. Google search is really, really good. Waymo cars really don't crash that much. They do a lot of projects for the "greater good," and haven't historically been known to abuse the data they collect.<p>They're pitching this internal video to their employees as inspiration to build a better quality of life for future generations. They aren't pitching it as, uh, a way to data mine everyone for profit. If ethics is about what you do when nobody is looking, then this is a good example of consistent ethics. When Google says publicly that they care, and then privately says they care, I think it's safe to say they genuinely do care.