Summary: $25M for projects in a wide array of categories. Grants are $500K to $2M for one to three years of deliverables.

The application questions can be found here: https://ai.google/static/documents/impact-challenge-application.pdf
This is weird.

1. Why does it need AI? Why not just fund things that do social good, instead of giving out computing credits that will eventually run out?

2. The successful projects join a startup accelerator. Wtf?

These guys have really lost track of what charity means.
This is awesome! "Crowdsourcing" AI tech is a pretty smart business move, especially when it's wrapped in "social good". This initiative can bring a lot of talented and passionate minds together, and who knows, it could kick-start the next Google product! And if nothing comes of it, who cares? It's still a marketing win (assuming it gains some traction). I expected nothing less from the idle minds at this corp.
Is this anything like "don't be evil"? I want to try not to be cynical, but so many ethical concerns, especially regarding privacy, have come out of Google's corner in the past few years that the "for social good" part instantly makes me paranoid about what it will really be used for eventually.

The story [1] about Google patenting a person's work after an interview also comes to mind.

Having got that off my chest: hopefully the participants read the legal terms very carefully, and might even consider having a lawyer review them.

[1] https://patentpandas.org/stories/company-patented-my-idea
I hate the term "AI for social good", because it reminds me that the default use of AI today is actually far from being a social good. I wish "AI for social good" were not a thing, and the default use cases of AI were for good, social or otherwise.
Excerpted from their principles [1]:

[...] we will not design or deploy AI in the following application areas:

1. that cause or are likely to cause overall harm [...]

2. [...] Weapons [...]

3. [...] that gather or use information for surveillance [...]

4. [...] whose purpose contravenes widely accepted principles of international law and human rights [...]

[...] As our experience in this space deepens, this list may evolve. [...]

That last sentence cracked me up; it was definitely generated by some DeepMind AI called DeepSarcasm.

[1] https://ai.google/principles/
As a small social enterprise with ambitions to use data to improve the work of local health/activity/wellbeing charities, we find the offer of help appealing, so we're applying. I get the various arguments here, but on balance, what would we achieve by refusing their help?
The negativity here is saddening. So what if it is for PR? How many other companies are doing these things, even for PR? Isn't it ultimately promoting AI projects that have at least some social good in their objectives?
"AI for social good" is one of those things that is kind of absurd in its very premise.

AI is not something we program to deliver abstractions like "social good". It's something we program to do specific things, which might then end up being used for social good.