This somehow reminds me of Marc Levoy's HDR+ [0], which uses the same exposure time for every shot instead of conventional bracketing. The technique captures many identically exposed short shots and computationally merges them into a brighter photo (among other HDR effects) with reduced noise, which in a way emulates some benefits of a longer exposure, though not the motion blur.<p>[0] <a href="https://blog.research.google/2014/10/hdr-low-light-and-high-dynamic-range.html" rel="nofollow">https://blog.research.google/2014/10/hdr-low-light-and-high-...</a>
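The noise-reduction half of that is easy to demonstrate: averaging N identically exposed frames shrinks the noise standard deviation by roughly sqrt(N). A minimal sketch (a toy model, not the actual HDR+ pipeline, which also aligns frames before merging):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: a flat gray patch, values in [0, 1].
scene = np.full((64, 64), 0.2)

def capture(scene, noise_std=0.05):
    """Simulate one short-exposure shot with additive sensor noise."""
    return scene + rng.normal(0.0, noise_std, scene.shape)

# HDR+-style burst: 16 identical short exposures, merged by averaging.
burst = np.stack([capture(scene) for _ in range(16)])
merged = burst.mean(axis=0)

# Noise std of a single frame vs. the merge: roughly a 4x (sqrt(16)) drop.
print(burst[0].std(), merged.std())

# The clean merge can then be brightened with digital gain without
# amplifying noise as much as gaining a single frame would.
brightened = np.clip(merged * 2.0, 0.0, 1.0)
```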
Theoretically, it should be possible to create images from a static video feed that have higher resolution than the video itself.<p>Also, it would be interesting to see whether an upscaling model can be trained on a specific high-res image (taken e.g. with a DSLR) to upscale a video feed of the same place.
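The first idea is classic multi-frame super-resolution: if the camera jitters by sub-pixel amounts between frames, each low-res frame samples the scene at different positions, and together they can pin down a higher-res image. An idealized sketch, assuming exact half-pixel shifts and no noise (real methods must estimate the shifts and solve a regularized inverse problem):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical static scene at the "true" high resolution.
hi = rng.random((32, 32))

# Four video frames at half resolution, each sampled with a different
# half-pixel offset (the sub-pixel jitter a handheld feed would have).
frames = {(dy, dx): hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Shift-and-add: interleave the frames back onto the high-res grid.
recon = np.empty_like(hi)
for (dy, dx), frame in frames.items():
    recon[dy::2, dx::2] = frame

# Under these ideal assumptions the four 16x16 frames jointly
# determine the 32x32 image exactly.
print(np.allclose(recon, hi))
```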