A couple thoughts:<p>Unless your image library is way fancier than I imagine, you would get a much less biased result if you convert to a linear color space. This code:<p><pre><code> # Invert grayscale image
def invertImage(image):
    return 255 - image
</code></pre>
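As a sketch of what I mean (assuming "image" is an 8-bit sRGB grayscale numpy array; the constants are the standard sRGB decoding ones, and the function names are mine):

```python
import numpy as np

def srgb_to_linear(image):
    """Decode 8-bit sRGB values to linear light in [0, 1].

    The standard sRGB transfer function: a linear segment near black,
    a 2.4-power curve elsewhere.
    """
    c = image.astype(np.float64) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def invert_linear(image):
    # Invert in linear light rather than in the encoded values
    return 1.0 - srgb_to_linear(image)
```

Note that encoded mid-gray (128) decodes to roughly 0.22 in linear light, not 0.5, which is exactly the bias the original snippet bakes in.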
doesn't accurately compute the amount of thread coverage desired, because "image" isn't linear in brightness.<p>For this particular use case, though, you probably want to transform the output further. Suppose one thread locally reduces light transmission by a factor of k. Then two threads reduce it by k^2 (assuming there's enough blurring, everything is purely backlit, no reflections, etc.), and in general n threads scale the brightness by k^n, reducing log(brightness) by n*log(1/k). So the number of threads needed at a point is proportional to -log(luminosity); I would try computing that and fitting threads to it.<p>Finally, this sounds like a wonderful application of compressed sensing. I bet you could get an asymptotically fast algorithm that comes reasonably close to optimal.
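A minimal sketch of that transform (k is a made-up per-thread transmission factor you'd have to measure, and eps clips pure black, where the model would demand infinitely many threads):

```python
import numpy as np

def thread_density_target(luminosity, k=0.9, eps=1e-4):
    """Map linear luminosity in (0, 1] to an approximate thread count.

    If each thread transmits a fraction k of the light, n overlapping
    threads leave k**n, so n = -log(luminosity) / log(1/k).
    """
    lum = np.clip(luminosity, eps, 1.0)
    return -np.log(lum) / np.log(1.0 / k)
```

Sanity check: a pixel whose luminosity is k**3 maps to a target of exactly 3 threads, and full brightness maps to 0.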