The project is very interesting, but I'm not sure why they are doing it. How does that protect user rights? This doesn't measure the AI/ML progress that's available to state actors.
Part of what we are learning in AI/ML is which problems ML is relevant to and which it is not (or, to put it another way, where we can find statistical relevance between features and targets).<p>So I think you need some kind of "meta-metric" that measures the growth of the taxonomy itself, and perhaps some kind of weighting for the impact of each solution.<p>There is also an interaction effect (for instance, natural language processing is powerful, and common-sense reasoning is powerful, but put them together and you have a knockout), but I don't know how to go about measuring that.
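A toy sketch of what that could look like (my own illustration, not anything from the EFF project): score each problem area in the taxonomy by how strongly its features correlate with the target, weight each area by an assumed impact factor, and roll the results up into a single "meta-metric". The area names, data, and weights below are all hypothetical.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical taxonomy: area -> (feature values, target values, impact weight).
# One area where ML is clearly relevant, one where it mostly is not.
taxonomy = {
    "image_classification": ([1, 2, 3, 4, 5], [1.1, 2.0, 2.9, 4.2, 5.1], 0.9),
    "lottery_prediction":   ([1, 2, 3, 4, 5], [3.0, 1.0, 4.0, 1.0, 5.0], 0.2),
}

def meta_metric(taxonomy):
    """Impact-weighted average of |feature-target correlation| per area."""
    total_weight = sum(w for _, _, w in taxonomy.values())
    score = sum(abs(pearson(x, y)) * w for x, y, w in taxonomy.values())
    return score / total_weight

print(round(meta_metric(taxonomy), 2))
```

Tracking how this number, and the number of areas in the taxonomy, change over time would be one crude way to measure the growth of the taxonomy itself; the interaction effect between areas is still not captured here.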
Who is, or is seeking to be, the canonical source of truth for AI/ML policy and positions, and why should we trust them — how do we trust them?<p>In twenty years, what body will be directing the policies, laws, and regulations for how humanity deals with what is essentially another "sentient" species?<p>Edit: just read this, and apparently this is exactly what the EFF is attempting to do...<p>But the question still remains: how do we trust these policies, and how do we request or reject them?<p>I don't want to deal with this the way the legal system is currently set up; lawyers and the law are flawed in many respects, and I don't think it's a good idea to map the old directly onto the new and uncharted.<p>(Apologies for the clunky language/terms... please educate me on how to speak of this if you know)
For ease of reading, I think they should use different colors for the bars representing "human score" and those representing "excellent performance", so that someone skimming doesn't assume they are the same thing; currently both are drawn as red dotted lines but mean two different things.
I believe the time has come for an independent institute to track AI and Big Data technology applications and begin creating guidelines for both industry self-certification and regulation. The other paths available, ignoring it, fearmongering about it, and trying to fit it into other political movements, do not seem to me to be heading towards an acceptable outcome.<p>It has to be a technology-heavy group, otherwise it won't create much value. It also has to be grounded in history, philosophy, and political science, otherwise it'll just be reactionary. And we have enough reactionary groups already.
Nice use of a shared Jupyter Notebook for data gathering. <a href="https://www.eff.org/ai/metrics" rel="nofollow">https://www.eff.org/ai/metrics</a>