
“Every time there is discussion on Real AI enhance images I remember this image”

47 points by yoelo over 3 years ago

5 comments

_Microft over 3 years ago
Does anyone know why "fastMRI" [0] would not suffer from problems like this, especially if there is something in the images that has not been in any image it was trained on (e.g. foreign matter)? Enhancing faces going wrong is one thing; getting medical images wrong is another matter entirely.

[0] "fastMRI is a collaborative research project between Facebook AI Research (FAIR) and NYU Langone Health. The aim is to investigate the use of AI to make MRI scans up to 10 times faster. By producing accurate images from under-sampled data, AI image reconstruction has the potential to improve the patient's experience and to make MRIs accessible for more people." See https://fastmri.org
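For context, a minimal sketch of the under-sampling setup the quoted description refers to: an MRI scanner measures k-space (the 2-D Fourier transform of the image), and an under-sampled scan measures only part of it. Any reconstruction can therefore be checked against, and forced to agree with, the actual measurements, which plain photo upscaling cannot do. All function names below are illustrative assumptions, not the fastMRI code.

```python
# Illustrative sketch of under-sampled MRI reconstruction (not fastMRI code).
import numpy as np

def undersample(image, keep_fraction=0.25, rng=None):
    """Simulate an under-sampled scan: keep a random subset of k-space rows."""
    rng = rng or np.random.default_rng(0)
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape[0]) < keep_fraction   # which rows were measured
    return kspace * mask[:, None], mask

def data_consistency(reconstruction, kspace_measured, mask):
    """Overwrite the reconstruction's k-space rows with the measured ones,
    so the output always agrees with the physical measurements."""
    k = np.fft.fft2(reconstruction)
    k[mask] = kspace_measured[mask]
    return np.fft.ifft2(k).real

# Zero-filled baseline: invert only what was measured, then enforce consistency.
image = np.random.rand(64, 64)            # stand-in for a real scan
k_meas, mask = undersample(image)
zero_filled = np.fft.ifft2(k_meas).real   # aliased/blurry starting point
recon = data_consistency(zero_filled, k_meas, mask)
```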
thehappypm over 3 years ago
This is a nightmare if it ever gets used in trials. "We took this low-res picture from the burglary, used our high-tech Artificial Intelligence to enhance it, and now it looks just like you!"
lambdamore over 3 years ago
If you take a Bayesian perspective on the super-resolution problem, things make sense: a given low-res image corresponds to a distribution of possible high-res images. Which one is most likely? That depends on the prior and the likelihood. The right-hand figure is a possible outcome; however, if we have a strong prior toward well-known people, we will be biased toward those people. It's not wrong, it is just not comprehensive.
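A toy sketch of this framing (illustrative, not any particular paper's method): super-resolution as MAP estimation, where the likelihood only requires the candidate to downsample to the observation, and the prior decides among the many candidates that do.

```python
# Super-resolution as MAP estimation:  x* = argmax_x  p(y | x) * p(x)
# y = observed low-res image, x = candidate high-res image.
import numpy as np

def downsample(x, factor=8):
    """Forward model: average-pool the high-res candidate to low resolution."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def log_likelihood(y, x, sigma=0.05):
    """Gaussian likelihood: how well does x explain the observed low-res y?"""
    return -np.sum((downsample(x) - y) ** 2) / (2 * sigma ** 2)

def log_prior(x, prior_mean):
    """Toy prior: Gaussian around the dataset's mean face. A real system
    would use a learned generative prior instead."""
    return -np.sum((x - prior_mean) ** 2)

def log_posterior(x, y, prior_mean):
    return log_likelihood(y, x) + log_prior(x, prior_mean)
```

Many distinct high-res images score identically under the likelihood, since they all downsample to the same observation; the prior alone decides among them, and that is exactly where a bias toward well-known faces enters.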
jjcon over 3 years ago
I'm not sure why shitty AI always gets used as evidence against the field. We typically don't see shitty software and say, well, all software must be shitty then.
908B64B197 over 3 years ago
ML systems are biased when their data is biased. This face-upsampling system makes everyone look white because the network was pretrained on Flickr-Faces-HQ (FFHQ), which mostly contains pictures of white people. Train the exact same system on a dataset from Senegal, and everyone will look African.
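To make the parent's point concrete, a hedged sketch (PyTorch; the dataset names are hypothetical placeholders and this is not the actual model or training code): the same architecture and loss, pointed at a different dataset, learns a different prior.

```python
# Illustrative sketch: only the data changes; the learned prior follows it.
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_upsampler(dataset, epochs=10):
    model = nn.Sequential(                     # same architecture either way
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Upsample(scale_factor=8, mode="bilinear"),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for low_res, high_res in DataLoader(dataset, batch_size=32, shuffle=True):
            opt.zero_grad()
            loss = loss_fn(model(low_res), high_res)
            loss.backward()
            opt.step()
    return model

# Hypothetical datasets of (low_res, high_res) pairs -- the only difference:
# model_ffhq    = train_upsampler(ffhq_pairs)     # mostly-white Flickr faces
# model_senegal = train_upsampler(senegal_pairs)  # faces from Senegal
```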