This is great and all, but we still run into the problem of political biases embedded in the source data [0].

Musk's AI's stated aim is to get at the truth, not to retroactively remove biases. Politics aside, I think that's a noble goal.

I agree with him that teaching an AI to lie is a dangerous path. What's happening now probably isn't exactly lying, but it's close enough to be on that path.

We should find a way to feed in source material from all "biases," if you will, and have the model produce what's closest to reality. That's obviously easier said than done, but I don't think the AI Czar, VP Harris, aims to do this.

If we're too divided, or too hellbent on pushing our own agendas, it'll be a bad outcome for everyone.

Unfortunately, the differences we have sit at a very fundamental level: how reality is perceived, and what we consider meaningful. It comes down to whether something has meaning by its nature, or whether we assign meaning to it culturally and societally.

The former is the more "conservative" view (in the personality sense, not the political one).

The latter is more like: "everything that has meaning is based on the meaning we say it has, so we can ascribe whatever level of meaning we wish to it or to other things." It's the idea that many things are social constructs, and that we can change those constructs to craft the world we'd like to see.

I'm probably wording this poorly, but this fundamental difference in perception is going to be at the forefront of AI ethics very quickly.

[0] https://en.m.wikipedia.org/wiki/Ideological_bias_on_Wikipedia#:~:text=The%20authors%20found%20that%20%22Wikipedia,about%20immigration%20trended%20toward%20Republican