Does anyone know why "fastMRI" [0] would not suffer from problems like this, especially if there is something in the images that was never in any image it was trained on (e.g. foreign matter)? Enhancing faces going wrong is one thing; getting medical images wrong is another matter entirely.<p>[0] <i>"fastMRI is a collaborative research project between Facebook AI Research (FAIR) and NYU Langone Health. The aim is to investigate the use of AI to make MRI scans up to 10 times faster. By producing accurate images from under-sampled data, AI image reconstruction has the potential to improve the patient’s experience and to make MRIs accessible for more people."</i>, see <a href="https://fastmri.org" rel="nofollow">https://fastmri.org</a>
This is a nightmare if it ever gets used in trials. "We took this low-res picture from the burglary, used our high-tech Artificial Intelligence to enhance it, and now it looks just like you!"
If you take a Bayesian perspective on the super-resolution problem, things make sense: a given low-res image corresponds to a whole distribution of possible high-res images. Which one is more likely? That depends on the prior and the likelihood. The right figure is one possible outcome; however, if we have a strong prior toward well-known faces, we will be biased toward those people. It's not wrong, it's just not comprehensive.
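To write that split down concretely (standard Bayesian notation, nothing specific to this system):<p><pre><code>p(x_HR | y_LR)  ∝  p(y_LR | x_HR) · p(x_HR)</code></pre><p>Here p(y_LR | x_HR) is the data-consistency term (does the candidate high-res image downsample to the observed low-res one?) and p(x_HR) is the prior learned from the training faces. The right-hand figure can score perfectly on the likelihood and still be pulled toward whatever the prior considers a typical face.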
I'm not sure why shitty AI always gets used as evidence against the field. We typically don't see shitty software and say: well, all software must be shitty then.
ML systems are biased when the data is biased. This face upsampling system makes everyone look white because the network was pretrained on Flickr-Faces-HQ (FFHQ), which mainly contains photos of white people. Train the <i>exact</i> same system on a dataset from Senegal, and everyone will look African.
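For what it's worth, here is a minimal PyTorch sketch of how a GAN-prior upsampler of this kind (PULSE-style latent search) produces that behavior. The <i>Generator</i> below is a toy stand-in for the real StyleGAN pretrained on FFHQ; all names, shapes, and hyperparameters are illustrative assumptions, not the actual system.<p><pre><code># Sketch: search a face-GAN's latent space for an image whose
# downsampled version matches the low-res input. The output can only
# be a face the generator knows how to produce, i.e. one shaped by
# its training distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy placeholder for a pretrained face GAN (e.g. StyleGAN on FFHQ)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 64 * 3)

    def forward(self, z):
        return torch.tanh(self.fc(z)).view(-1, 3, 64, 64)

def upsample(low_res, generator, latent_dim=512, steps=200, lr=0.1):
    """Find a latent z so that downsample(G(z)) matches the low-res input."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        high_res = generator(z)
        # Downsample the candidate and compare to the observation.
        down = F.interpolate(high_res, size=low_res.shape[-2:],
                             mode="bilinear", align_corners=False)
        loss = F.mse_loss(down, low_res)
        loss.backward()
        opt.step()
    return generator(z).detach()

if __name__ == "__main__":
    G = Generator()
    low_res = torch.rand(1, 3, 16, 16) * 2 - 1  # fake 16x16 input in [-1, 1]
    print(upsample(low_res, G).shape)  # torch.Size([1, 3, 64, 64])</code></pre><p>The data-consistency loss only constrains the downsampled result; everything else about the output is filled in by the generator, so the demographics of the training set are exactly what you get back.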