
Unbounded High Dynamic Range Photography Using a Modulo Camera

140 points, by dnt404-1, almost 10 years ago

22 comments

dr_zoidberg, almost 10 years ago

> No more will photographers or even ordinary people have to fumble with aperture size and exposure length.

This is a "computer scientist's" understanding of photography, and this phrase alone can even be seen as "dangerous" by photographers. There's more to aperture size and exposure length than "how much light reaches the sensor" (focus, depth of field, motion blur and bokeh, to name a few) that is not acknowledged by that understanding. The technology is good, but it won't change the way photographers deal with photography; at best, it will give them a better tool to work with.

I also wonder how much impact this could have. Foveon Inc. had developed a great sensor that was going to be revolutionary, about 20 years ago. Today only Sigma uses it, and the UX is so bad in those cameras that the (very good) technology they had their hands on never got a chance to shine.

Also, is this implementation very different from having a 16-bit ADC in normal sensors? The end result would be the same: higher dynamic range. And the increase in bits seems a lot closer (because it would mean little change to current manufacturing processes) than a whole new type of sensor being used in cameras.
darkmighty, almost 10 years ago

The title here is terrible, because over-saturation is an effect of poor tone mapping, which is inevitable if you want to display an HDR image on a low-dynamic-range display. An accurate title would be "the end of over-*exposed* images", but why not just keep the original?
soylentcola, almost 10 years ago

Looks to be more related to exposure than saturation, but still, it's an interesting new take on dynamic range. I'm more a hobbyist than an expert, but current tone mapping techniques (at least anything mostly automated) can often leave you with "halos" around objects where the software has feathered the edges between lighter and darker areas. Then the article mentions the issues with using multiple shots for tone mapping.

I still try to just expose "properly" for the effect I want and shoot RAW to give me a little leeway in tweaking shadows and highlights, but I'd be interested in trying out software that uses this technique, if only for something new to mess with.
slr555, almost 10 years ago

I am probably less technical than most of you, but the upside of this technology (once it is real-world ready) might be producing what I would call a RAW file that is RAW(er). By that I mean the file would simply contain a wider range of scene information that could be accessed by Adobe Camera Raw or the like. For the class of photographers who have no interest in RAW, the information would help the JPEG engine in the camera make a "better"-informed JPEG (or whatever the standard is at that time). To me, capturing more scene info is better than less, assuming there are minimal penalties being paid elsewhere (file size, burst shooting, etc.).

As a side note, it seems to me that aperture and shutter speed would still be essential, as they affect not only exposure but depth of field and subject motion. Perhaps a technique that allows proper exposure independent of f-stop/shutter/ASA would allow exploration of extremely wide apertures with slow shutter speeds but no need for ND filters.

In any event, it will be interesting to see what happens.
Mithaldu, almost 10 years ago

What's with the completely useless thumbnail at the bottom?

That said, the technological explanation sounds simple and solid. Why aren't cameras doing that already?
baldeagle, almost 10 years ago

It seems like this would make big glass even more important. This solution protects against over-exposure, but doesn't address under-exposure in a scene that moves (think indoor bar photography). So it isn't an end-all solution that will let everyone capture reality with their cell phones, but it is an awesome step in that direction.
kefka, almost 10 years ago

I had an idea similar to this that solved the oversaturation issue.

Use 2 cameras. Have them at right angles to the shutter point, separated by a few inches. At the T-junction where the cameras' lines of sight cross the shutter area, put a prism.

The prism would give each camera 1/2 of the available photons. Now use any standard HDR processing technique.
jlarocco, almost 10 years ago

I wonder how this compares to just adding a few extra bits to the sensor data. The paper (http://web.media.mit.edu/~hangzhao/papers/moduloUHDR.pdf) compares against an "8-bit intensity camera", but most cameras today use 12-16 bits internally. Isn't "this pixel overflowed" essentially just an extra bit of information?

The pictures using the "modulo camera" are definitely better quality, but don't look noticeably better than what I could do with a RAW from my camera and 2 seconds in Capture One or Lightroom adjusting the shadows and highlights.
salimmadjd, almost 10 years ago

As a photographer who at times needs to deal with HDR, this is great news. In the past, I had often wondered why some scheme like this wasn't possible. However, the title of this post is horrible. I understand they were trying to market it in a way that was accessible to most readers. That said, photography has become quite popular and most people now understand HDR, so they could have just presented it as true HDR from only a single frame.
oppositelock, almost 10 years ago
I would love to feed a photo of an overexposed zebra to their reconstruction algorithm and see what happens. I think it will be spectacular!
dharma1, almost 10 years ago

Read about something like this on http://image-sensors-world.blogspot.co.uk a while ago: instead of saturating the CMOS when max brightness is exceeded, it just keeps going.

Looking forward to seeing this commercialised.
IshKebab, almost 10 years ago

This is a great idea. It would be interesting to see the failure modes, though, because the phase-unwrapping surely can't handle every situation (e.g. adding a constant value > 255 to the whole image).
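The failure mode mentioned above can be shown with a toy 1-D unwrapper. This is a minimal sketch of neighbor-based unwrapping (my own simplification; the paper uses a more sophisticated graph-cut approach): it recovers any scene whose adjacent pixels differ by less than half the wrap range, but a constant offset over the whole image is fundamentally unobservable.

```python
WRAP = 256  # toy 8-bit well depth

def unwrap(samples):
    """Naive 1-D phase unwrapping: assume adjacent pixels differ by less
    than half the wrap range, and accumulate the implied rollovers."""
    out = [samples[0]]
    for s in samples[1:]:
        diff = (s - out[-1] % WRAP + WRAP) % WRAP
        if diff > WRAP // 2:  # closer going the other way: negative step
            diff -= WRAP
        out.append(out[-1] + diff)
    return out

# A smooth ramp exceeding the well depth is recovered exactly...
scene = list(range(0, 600, 7))
assert unwrap([v % WRAP for v in scene]) == scene

# ...but a constant offset > 255 over the whole image is invisible: the
# shape comes back, anchored at the wrong absolute brightness.
shifted = [v + 300 for v in scene]
assert unwrap([v % WRAP for v in shifted]) != shifted
```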
megrimlock, almost 10 years ago

Anyone know what they mean by an "inverse modulo" algorithm? That only seems meaningful for numbers coprime to the sensor ceiling value.
Tekker, almost 10 years ago

"Beginning of the end for over-saturated images"?

Well, then, say goodbye to the Disney Channel, the most oversaturated channel out there.
paulmd, almost 10 years ago

Here's an idea: instead of separately tracking the number of "resets", we just add those bits onto the left-hand side of the measurement. We could call it... having more bits in the ADC. Expose for the highlights and then pull more detail out of the shadows with your extra bits of ADC precision.

There are not 1 million ADC units in a 1-megapixel camera; there are a few ADCs (a Canon 7D has 2 image processors with 4 ADCs each) that are iterated across the CCD sites to progressively read them out. To make this new sensor, you need to cram 50 million sets (50 MP is state of the art) of voltage comparator/charge reset/digital counter circuits onto the CCD, and for N bits of rollover accuracy they need to be able to trigger, erase the charge, and resume exposure at least N times during the exposure time E (which is, say, 1/500th of a second, since they're complaining about movement during multiple HDR exposures). Practically speaking, the trigger/operation/resume period must be significantly less than E/N, since I see no way to retain the exposure during the period when the charge well is being drained to zero. If this time is non-trivial, that translates to losing the fine bits of your ADC accuracy.

In their image they compare a 13-stop exposure (the current state of the art) to an 8-stop exposure (state of the art in 1900). So assuming they didn't just pull a photoshop out of their ass, they are overall claiming a 5-stop increase over the state of the art. That's 5 extra bits of recovery (1 stop is double the range, i.e. an extra bit), so they think they can trigger the reset circuit 5 times during an exposure. That implies this circuit must have a maximum cycle time of 1/2500th of a second (practically speaking, a lot less, to give time for the actual exposure).

With this math I'm also assuming that the comparator is perfect and doesn't lose any accuracy; variance in trigger threshold or trigger time translates to losing some of your accuracy again. They also need to generate little enough waste heat to avoid hot pixels (this is a major reason the ADC is on the image processor rather than the CCD), and you need 50 million sets of these on the CCD.

If you can do it, then go for it. It's a nice idea on paper, but I think there are a lot of physical obstacles to overcome. If it were that easy, someone would have done it already. It's especially difficult given that reading out the sensor is *destructive*: measuring it wipes the charge, so I'm not sure how any of this would work at all.

Now what would actually be interesting is to apply the image-processing techniques to dual-DR imaging, as they mention in their "related works" section. The open-source Magic Lantern firmware for Canon DSLRs allows you to scan alternating rows at different ISOs, so effectively you can capture a lot more dynamic range at the expense of your vertical resolution. It's all volunteers, and they probably haven't applied all the fancy image-processing magic with the convolutions and the hippity-hop rap music. How about reconstructing those overexposed lines instead? Or working on some of the sensors with physical implementations of dual-DR?
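The back-of-envelope numbers in this comment check out; restated as arithmetic (the variable names and the 1/500 s exposure are the comment's assumptions, not figures from the paper):

```python
# Re-running the numbers: 5 claimed extra stops means 5 reset cycles
# must fit inside one exposure.
claimed_stops = 13 - 8          # 13-stop demo vs. the 8-stop baseline
extra_bits = claimed_stops      # 1 stop of range == 1 bit (a doubling)
assert 2 ** extra_bits == 32    # 5 stops == 32x the linear range

exposure_denom = 500            # assumed exposure E = 1/500 s
resets = extra_bits             # the reset circuit must fire 5 times in E
cycle_denom = exposure_denom * resets
print(f"cycle time must be under 1/{cycle_denom} s")  # → under 1/2500 s
```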
jcr, almost 10 years ago

The original title of the article is "Unbounded High Dynamic Range Photography Using a Modulo Camera".
vernie, almost 10 years ago
Can somebody explain how this design differs from existing self-resetting pixel designs?
jchrisa, almost 10 years ago

Reminds me of the implementation of the TRNG from yesterday's front page.
brador, almost 10 years ago

This was the runner-up prize-winning paper; what won?
iliis, almost 10 years ago

A related but quite different technology I have worked with is so-called Dynamic Vision Sensors (DVS, [1]): a sensor developed to mimic a biological retina that only reports *changes* in brightness, asynchronously for every pixel. Essentially, you get a stream of short messages saying "Pixel XY just got a little brighter at time T."

While such a thing is not very useful for taking pictures (pointing the camera at a static scene will generate no output at all) or videos (you don't get discrete frames, you get a continuous stream of differentiated pixels), it has a lot of potential for computer-vision tasks:

For one, as the camera only reports changes, only the interesting information is transmitted, which greatly reduces computational load (e.g. no need for "yes, that white wall is still there"-style calculations for every frame).

Secondly, because these changes are reported independently per pixel, they are very fast: microsecond accuracy with a dozen or so microseconds of delay is easily achievable. For comparison, a fast camera at 120 FPS (which will produce *a lot* more data to process) has an accuracy and latency of 1/120 s = 8333 us. This also implies that motion blur is basically nonexistent.

And thirdly, as absolute intensity doesn't matter, you don't have any problems with high dynamic range either.

The only downside is that you don't really get a picture, and most of the traditional computer-vision approaches are unusable ;)

But still, such a camera is very neat, even if you just track the movement of your mice or want to balance a pencil on its head [2].

The real deal, however, is full vision-based 3D SLAM, which would allow for very fast and robust movement without any external help from motion-capture systems or expensive (in money, weight and power) sensors like laser scanners. AFAIK, we're not there yet (see [3] for some work in that direction), but that would bring pizza delivery by drone directly to your desk quite a bit closer to reality...

--

[1] http://www.inilabs.com/products/davis

[2] http://www.ini.ch/~conradt/projects/PencilBalancer/

[3] http://rpg.ifi.uzh.ch/research_dvs.html, also https://www.doc.ic.ac.uk/~ajd/Publications/kim_etal_bmvc2014.pdf
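The per-pixel change reporting described above can be sketched as a toy event generator. This is a simulation of the principle only (the names and the 0.2 contrast threshold are my assumptions; real DVS hardware does this asynchronously in analog circuitry, not over discrete frames):

```python
import math

THRESHOLD = 0.2  # assumed log-intensity contrast threshold per event

def dvs_events(frames, times):
    """frames: list of {pixel: intensity} snapshots taken at `times`.
    Emits (pixel, t, polarity) whenever a pixel's log-brightness has
    moved by THRESHOLD since that pixel's last event."""
    ref = {p: math.log(v) for p, v in frames[0].items()}
    out = []
    for frame, t in zip(frames[1:], times[1:]):
        for p, v in frame.items():
            delta = math.log(v) - ref[p]
            while abs(delta) >= THRESHOLD:
                pol = 1 if delta > 0 else -1
                out.append((p, t, pol))
                ref[p] += pol * THRESHOLD  # advance the per-pixel reference
                delta -= pol * THRESHOLD
    return out

# A static scene produces no events at all:
assert dvs_events([{0: 1.0}, {0: 1.0}, {0: 1.0}], [0, 1, 2]) == []

# A pixel that brightens by a factor of e fires ~1/THRESHOLD positive events:
print(dvs_events([{0: 1.0}, {0: math.e}], [0, 1]))
```

The log-domain threshold is also why absolute intensity (and hence dynamic range) stops mattering: a 10% brightness change fires the same events in dim and bright light.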
tlb, almost 10 years ago

Title changed from "Beginning of the End for Oversaturated Images"
cozzyd, almost 10 years ago
Instagram and the like will ensure the future proliferation of shitty images, no matter the technology.