The rumour of IBM dropping “all facial recognition work” is unsubstantiated, despite making its way into industry headlines.<p>Krishna’s letter is here[0]. IBM will cease to sell related products and services. One might speculate that it will resume sales once strong regulations and limitations are in place.<p>[0] <a href="https://www.ibm.com/blogs/policy/wp-content/uploads/2020/06/Letter-from-IBM.pdf" rel="nofollow">https://www.ibm.com/blogs/policy/wp-content/uploads/2020/06/...</a>
There are a number of cynical comments here about how they weren't making money on the technology and are just announcing this for PR reasons. Well, maybe, but isn't that sort of cynical response even worse?<p>I'm rabidly against the use of facial recognition on unwilling subjects, whether it's a government actor (by far the most oppressive use) or a corporate actor. I'm rabidly against public space cameras. I want to see this technology die and never return.<p>We all love to pounce on companies for doing things we don't like. Why don't we celebrate this as the victory it is? Of <i>course</i> there's a PR component here. Why wouldn't they make an announcement? Why wouldn't they do it now when the audience might be more receptive to the idea? The fact that they weren't making money off it is unlikely to be the only reason they're canceling it. IBM plays the long game, and there's absolutely a market for this technology. A huge and profitable market. They could have kept at it and turned a profit.<p>So, yeah, they're trying to make some hay, but not every corporate action is purely cynical and evil. Let's appreciate that they've made a positive change, and let's hope that it increases awareness of a horrible technology, and puts pressure on the more egregious actors like Amazon and the defense industry. We don't have to pat IBM on the back, but we can cut them some slack.
This reads like "we're behind, not catching up, not making money, and really need an excuse to drop this" to me, but I might just be too cynical.
This headline reminded me of the recent "Chicago PD" episode "False Positive" (S7E6) [1]. A police chief pushes a new ID system, still in beta, into a high-profile case; among its touted "merits" is that it's 'strongly condemned by the ACLU'. People are shown on screen as just a collection of dots. Virtual code lines that affect real lives.<p>[1]: <a href="https://m.imdb.com/title/tt10691948/" rel="nofollow">https://m.imdb.com/title/tt10691948/</a>
> a product that is similarly just barely good enough to use.<p>I thought facial recognition was advanced (mainly based on genpop articles), but isn't China using it massively with success?
A striking contrast to what's happening in China, where facial recognition software made SenseTime the world's most valuable AI unicorn: "However, facial recognition does not seem to have been making the company much money, if any. To be fair the technology is really in its infancy and there are few applications where an enterprise vendor like IBM makes sense."
I think the cat is out of the bag. There are enough public datasets and published methodologies, simple enough to implement, that quite usable facial recognition software is within the bounds of an undergraduate homework project. Sure, IBM can probably make it more accurate, but nonetheless, if somebody wants to make a tool that does e.g. ethnic profiling, they can do it without IBM's help; the techniques for solving similar vision tasks are known, and the people who can apply them are widespread.
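To illustrate how little is left once you have a pretrained embedding model: the recognition step itself is just nearest-neighbor search over face embeddings. A minimal sketch (the 128-d vectors here are random stand-ins for a real model's output; the names and the 0.6 threshold are illustrative assumptions, not from any specific system):

```python
import numpy as np

def identify(probe, gallery, names, threshold=0.6):
    """Return the enrolled name closest to the probe embedding, or None."""
    # Euclidean distance from the probe to every enrolled embedding
    dists = np.linalg.norm(gallery - probe, axis=1)
    best = int(np.argmin(dists))
    # Reject matches beyond the threshold as "unknown"
    return names[best] if dists[best] < threshold else None

# Hypothetical embeddings; a real system would use a CNN's output vectors
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 128))          # three enrolled faces
names = ["alice", "bob", "carol"]
probe = gallery[1] + rng.normal(scale=0.01, size=128)  # noisy view of "bob"
print(identify(probe, gallery, names))       # prints "bob"
```

The hard part, training the embedding model, is exactly what the public datasets and published methods cover; the deployment glue above is trivial, which is the parent's point.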
I understand how ML can replicate existing cultural bias in recommendation or risk-scoring systems, but how does bias work in the context of facial recognition?
Quote: "However, facial recognition does not seem to have been making the company much money, if any."<p>Right there is the real reason why. The rest of IBM's blah blah is just throwing dust in our eyes.
Honestly if I were a POC the last thing I would want is accurate facial recognition.<p>FaceID will work regardless thanks to depth maps.<p>I can't see any other use of the technology benefiting me.
The issue is real. But with one not-so-successful vendor out, what does it mean?<p>Also, with China in the picture, as with many technologies: if you don't do it, would China give up as well? Wasn't it around 2016 that a coronavirus study at a US university stopped due to concerns, while China (the Wuhan lab) continued and gained a lead? Or the genetic HIV research on babies?<p>There's no good solution, but I don't think simply quitting is the answer for research in a potentially human-rights-related technology area.
Only AFTER they helped China develop the technology to racially ID their Muslims. And IBM isn't alone, many companies have helped China to efficiently fill its death camps.
This might be a strange argument, but look at <a href="https://news.ycombinator.com/item?id=23459963" rel="nofollow">https://news.ycombinator.com/item?id=23459963</a><p>The key is to have open data, and not to let China have the world's data while closing off its own IT and data for itself. Shutting down an area and letting China lead is not the human-rights answer. You need to force them to join the world in a meaningful way. We cannot study this from photos, the way we studied Soviet politics.<p>It's just too dangerous to leave the field and let China win. Go all in, and ensure the technology is used in an open and censurable manner.
They're just behind. And the ACLU "bias" study was thin and unscientific. Data sets and weights can have bias, but that bias can also be controlled. "Facial recognition" itself does not have bias.