
Ask HN: Why do smartphones take 16MB photos if 16MB of information is not there?

5 points by computator over 3 years ago
I find that most photos from an iPhone can be scaled down to ~400KB in an image editor and be indistinguishable from the original. There is no extra information in the excess data that the camera stores. What's going on here? Why doesn't JPEG compress away the excess 15MB automatically at the moment the photo is taken? Why isn't there an option in the UI to shoot at a lower resolution? Assuming that the excess 15MB is noise, why doesn't the camera itself automatically decide the optimum resolution?
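You can test the claim yourself by re-encoding a photo at a lower JPEG quality and comparing file sizes. A minimal sketch using Pillow; the filenames and the quality value are placeholders, not anything from the thread:

    import os
    from PIL import Image  # pip install pillow

    original = "photo.jpg"          # hypothetical input straight off the phone
    recompressed = "photo_q60.jpg"

    img = Image.open(original)
    img.save(recompressed, "JPEG", quality=60)  # lower quality -> smaller file

    print(f"original:     {os.path.getsize(original) / 1024:.0f} KB")
    print(f"recompressed: {os.path.getsize(recompressed) / 1024:.0f} KB")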

5 comments

mikewarot over 3 years ago

There *can* be up to the full bit density of the raw file worth of information present at the output of the camera chip. If you were to aim the camera at an even higher-resolution, high-contrast screen displaying error-corrected data as pixels, you could use it to input up to 50% of that amount, allowing for losses to error correction, noise, etc.

What a lossy compression algorithm does is decide how unexpected a given bit of data is, assuming a certain model of image interpretation. Decades of work went into this back when 512x512 pixels was high resolution: building mathematical models of human vision that most closely match how people actually perceive differences between images. This lets a computer estimate how noticeable a change to a given pixel would be. Doing this efficiently and effectively was not easy.

The least surprising bits get thrown away, in such a way that the process can be reversed to recreate the original image as closely as possible given the error budget, the constraints of the encoding method, and the vision model used.

The camera sensor itself is optimized for one job: capturing light and converting it to a voltage. Dedicated processors elsewhere handle JPEG and other compression methods.

Lossy compression always offers an adjustable quality setting; everyone has their own preference for how much picture quality they are willing to lose, which is why there is usually a slider or preset somewhere.

Professional photographers in high-stakes shoots record everything the sensor sees, without loss, in a RAW file. The incremental cost of the extra storage is far less than risking quality in those situations. Often a JPEG is made at the same time, with the same file numbering, to make the first sorting pass through the photos quicker.

Steve Jobs could have picked some arbitrary compression level, but that wouldn't have made it right.
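A toy version of the "least surprising bits get thrown away" step: JPEG-style coders transform 8x8 blocks with a DCT and quantize the coefficients, which zeroes out the ones that contribute least to the picture. A sketch with numpy and scipy; the gradient block and quantization step are made up for illustration:

    import numpy as np
    from scipy.fft import dctn, idctn

    # A smooth 8x8 gradient stands in for a typical low-detail image block.
    block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8.0

    coeffs = dctn(block, norm="ortho")          # energy concentrates at low frequencies
    step = 40.0                                 # made-up coarse quantization step
    quantized = np.round(coeffs / step) * step  # small, "unsurprising" coefficients -> 0

    print("coefficients kept:", np.count_nonzero(quantized), "of 64")
    reconstructed = idctn(quantized, norm="ortho")
    print("max pixel error:", round(float(np.abs(block - reconstructed).max()), 1))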
PaulHoule over 3 years ago

The trouble with noise is that you can't tell the difference between noise and signal. No matter what, noise hurts the compressibility of images and audio.

JPEG in particular has the problem that a constant quality setting doesn't give a constant quality result. You might look at one photo and decide that quality 40 is good enough for a particular use; some other image might require quality 55 to be good enough.

Thus you can't automatically compress images with JPEG at large scale and know you'll be happy with the results. I compressed a million images years ago and regretted it because many of them were overcompressed.

Newer image formats have ways to specify perceptual quality that come much closer to "set and forget".
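One workaround for the constant-quality problem: instead of fixing the quality number, search per image for the lowest setting that still meets a perceptual score. A sketch assuming Pillow and scikit-image, with a hypothetical input file and an arbitrary SSIM threshold:

    import io
    import numpy as np
    from PIL import Image
    from skimage.metrics import structural_similarity

    img = Image.open("photo.jpg").convert("L")  # grayscale keeps SSIM simple
    original = np.asarray(img)

    for quality in range(20, 96, 5):
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf))
        score = structural_similarity(original, decoded, data_range=255)
        if score >= 0.95:  # arbitrary "good enough" threshold
            print(f"quality {quality} reaches SSIM {score:.3f}")
            break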
RicoElectrico over 3 years ago

The difference in size you mention is significant, but there typically is indeed a safety margin in image resolution, called the Kell factor [1].

To examine the "true" resolution of an image, one could look at its autocorrelation or Fourier spectrum [2].

[1] https://en.wikipedia.org/wiki/Kell_factor

[2] https://photo.stackexchange.com/questions/107911/how-to-determine-the-actual-or-true-resolution-of-a-digital-photograph
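A quick look at the Fourier-spectrum idea from [2]: if little of a photo's spectral energy sits at high spatial frequencies, the pixel count overstates the "true" resolution. A numpy sketch with a hypothetical input file:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

    # Share of spectral energy in the central (low-frequency) quarter-band.
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8].sum()
    print(f"low-frequency share of spectral energy: {low / spectrum.sum():.1%}")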
toast0 over 3 years ago

Specification wars. Image sensors of the same size and quality tend to take pictures of about the same quality, provided they have sufficient pixels (or whatever they're called). Larger sensors mean larger lenses and in general more space, which is expensive to design in. Higher-quality sensors are hard to demonstrate in objective terms, so they're hard to market. Pixel count is an objective number and easy to compare, so it's a marketing win to have 20% more pixels than the other phone, even if that makes the pictures worse.
gvb over 3 years ago

> I find that most photos from an iPhone can be scaled down to ~400KB in an image editor and be indistinguishable from the original.

What do you mean by "scaled down"? Scaled down implies reduced size. Are you reducing the size (fewer pixels) or increasing the compression at the same effective size (same number of pixels)?

How do you judge that it is indistinguishable from the original? If you zoom in on the original vs. the scaled-down image, I expect you *will* see a difference.
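One way to make "indistinguishable" measurable rather than eyeballed: diff the decoded pixels, since visually similar JPEGs usually still differ pixel by pixel. A Pillow + numpy sketch; both filenames are hypothetical (e.g. the output of the re-encode sketch above) and must have the same pixel dimensions:

    import numpy as np
    from PIL import Image, ImageChops

    a = Image.open("photo.jpg").convert("RGB")
    b = Image.open("photo_q60.jpg").convert("RGB")  # same pixel dimensions

    diff = np.asarray(ImageChops.difference(a, b))
    print(f"max per-pixel difference:  {diff.max()}")
    print(f"mean per-pixel difference: {diff.mean():.2f}")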
Comment #29107939 not loaded.