A couple thoughts:<p>Unless your image library is way fancier than I imagine, you would get a much less biased result if you convert to a linear color space. This code:<p><pre><code> # Invert grayscale image
 def invertImage(image):
     return (255 - image)
</code></pre>
doesn't accurately compute the amount of thread cover desired, because "image" isn't linear in brightness.<p>For this particular use case, though, you probably want to transform the output further. Suppose that one thread locally reduces light transmission by a factor of k. Then two threads reduce it by k^2 (assuming there's enough blurring, everything is purely backlit, no reflections, etc.), and so on: n threads scale the brightness by k^n, so they change log(brightness) by n*log(k). I would try calculating -log(luminosity) and fitting threads to that.<p>Finally, this sounds like a wonderful application of compressed sensing. I bet you could get an asymptotically fast algorithm that comes reasonably close to optimal.
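A minimal sketch of that pipeline, assuming 8-bit sRGB input and NumPy (the constants are the standard sRGB ones, not anything from the article):

```python
import numpy as np

def srgb_to_linear(img):
    """Approximate sRGB -> linear conversion for 8-bit input."""
    x = img.astype(np.float64) / 255.0
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def thread_density_target(img, eps=1e-3):
    """Target roughly proportional to thread count:
    -log of the linear luminosity (clipped to avoid log(0))."""
    lin = srgb_to_linear(img)
    return -np.log(np.clip(lin, eps, 1.0))
```

White pixels map to 0 (no threads needed) and the target grows as pixels get darker, which is the behavior the -log argument above calls for.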
I would look at tomographic reconstruction (<a href="https://en.wikipedia.org/wiki/Tomographic_reconstruction" rel="nofollow">https://en.wikipedia.org/wiki/Tomographic_reconstruction</a>) algorithms for inspiration here. It's a very similar problem: which projections best approximate a given image?
This is so weird, I just decided to make one of these thread images last night and came up with the same algorithm which runs in about 10 minutes. But I think the loom robot is the real piece of work here. It takes me about an hour to string up 3000 passes manually, but on the other hand, I like my design of nails on a painted piece of plywood better. :) Nice work!
Very interesting! The inspiration behind this post, artist Petros Vrellis, uses a computer algorithm too: <a href="http://artof01.com/vrellis/works/knit.html" rel="nofollow">http://artof01.com/vrellis/works/knit.html</a>
I did something like this with a pen plotter and CMYK pens. I biased the random walk to follow/bounce off edges.<p><a href="http://imgur.com/a/n7OOd" rel="nofollow">http://imgur.com/a/n7OOd</a>
OMG, thank you! I am the creator of <a href="https://comments.network/" rel="nofollow">https://comments.network/</a>, which you are using at the bottom of the article, and it's the first time I've spotted it in the wild! If you have any problem/suggestion/anything, just write me: public@francisco.io<p>BTW, I love what you made. Have you considered selling these? It looks like it could have a similar business model to instapainting: <a href="http://instapainting.com/" rel="nofollow">http://instapainting.com/</a>
Very cool. I wonder if there could be an improvement in image quality if you added some form of lookahead, and chose the path that gives the best results. The branching factor would be horrendous, but I suspect that it could be pruned significantly using some A*-ish cost/ordering criterion.
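A rough sketch of what pruned lookahead could look like. The `gain(a, b)` function (darkness gained by the line from pin a to pin b) and `neighbors(pin)` are hypothetical helpers, not from the article; the beam keeps only the best few partial paths at each depth to tame the branching factor:

```python
import heapq

def beam_search(start_pin, gain, neighbors, depth=3, beam=5):
    """Choose the next pin by looking `depth` lines ahead,
    keeping only the `beam` best partial paths at each step.
    Costs are negated gains so heapq.nsmallest keeps the best."""
    paths = [(0.0, [start_pin])]
    for _ in range(depth):
        extended = [(cost - gain(p[-1], n), p + [n])
                    for cost, p in paths
                    for n in neighbors(p[-1])]
        paths = heapq.nsmallest(beam, extended)  # prune to the beam width
    return min(paths)[1][1]  # first pin after the start on the best path
```

With depth=1 this degenerates to the greedy choice; with depth>1 it can pick a locally worse line that sets up a much better follow-up.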
There seems to be one potential improvement: the original artist's algorithm seems to privilege detail in the more detailed areas over the overall correctness of line positioning.<p>(It might be that it tries to optimize for edge contours.)
Tobias, I think it's extremely cool, nice work.<p>I am curious: how would the final quality of the result be affected by:<p>- Changing the shape to, say, a square. Or maybe a circle is optimal, given a limited number of points.<p>- Increasing the number of endpoints. There must be some limit to this. For example, even an infinite number of endpoints would not look like a photo, but how much better could it look?
As a physicist, my first approach would be the Radon transform [1]. It should be tweaked a little, since overlapping threads are not additive in color (for black threads it's multiplicative, though).<p>[1] <a href="https://en.m.wikipedia.org/wiki/Radon_transform" rel="nofollow">https://en.m.wikipedia.org/wiki/Radon_transform</a>
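A toy illustration of the projection idea (not a real Radon transform, just nearest-pixel line sums with NumPy). Per the multiplicative caveat, you would feed it -log(image) so overlapping threads combine additively:

```python
import numpy as np

def line_sums(img, num_angles=4):
    """Crude discrete Radon sketch: for a few angles, sum image
    values along parallel lines using nearest-pixel sampling."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = max(h, w) // 2
    offsets = np.arange(-r, r + 1)  # signed distance of line from center
    ss = np.linspace(-max(h, w) / 2, max(h, w) / 2, 2 * max(h, w))
    sinogram = np.zeros((num_angles, len(offsets)))
    for i, theta in enumerate(np.linspace(0, np.pi, num_angles, endpoint=False)):
        nx, ny = np.cos(theta), np.sin(theta)  # line normal
        dx, dy = -ny, nx                       # line direction
        for j, t in enumerate(offsets):
            xi = np.round(cx + t * nx + ss * dx).astype(int)
            yi = np.round(cy + t * ny + ss * dy).astype(int)
            ok = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
            sinogram[i, j] = img[yi[ok], xi[ok]].sum()
    return sinogram
```

A real implementation would use proper interpolation and many more angles (e.g. scikit-image's radon/iradon), but this shows the "image as a set of line integrals" view.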
See also <a href="https://github.com/fogleman/primitive" rel="nofollow">https://github.com/fogleman/primitive</a>
Very cool! How much is it affected by the method of selecting the new pin after each line is added? I.e., currently you are doing oldPin = bestPin, and the first pin is selected at random... wouldn't it be better to add lines based on a Hough transform (rounding to the nearest pins), starting from the line that covers the maximum number of points and going down?
Really nice!
In your fitness function, wouldn't you prefer to normalize somehow for the line's length? I.e., longer lines (going between two far-away pins) will cover more dark pixels and are more likely to be picked, even though they might also cover many bright pixels. I wonder if the average darkness of the pixels along the line would work better?
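A quick way to compare the two scoring rules. The `err` array (remaining darkness per pixel) and the pin coordinates are hypothetical names for illustration, not from the article:

```python
import numpy as np

def line_pixels(p0, p1):
    """Pixel coordinates (nearest sampling) along the segment p0 -> p1."""
    n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1])) + 1
    xs = np.linspace(p0[0], p1[0], n).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n).round().astype(int)
    return xs, ys

def score_sum(err, p0, p1):
    """Total darkness covered: biased toward long lines."""
    xs, ys = line_pixels(p0, p1)
    return err[ys, xs].sum()

def score_mean(err, p0, p1):
    """Average darkness along the line: length-normalized."""
    xs, ys = line_pixels(p0, p1)
    return err[ys, xs].mean()
```

On an image with one small dark patch, the sum score can prefer a long line that grazes the patch while crossing bright areas, whereas the mean score prefers the short line that stays inside it, which is exactly the bias being asked about.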
I did something like this myself:<p><a href="https://github.com/danielvarga/string-art" rel="nofollow">https://github.com/danielvarga/string-art</a><p>My algorithm uses a bit fancier math: I reformulate the approximation problem as a sparse least squares problem over positive integers, solve the relaxation, and truncate and quantize the solution. It works quite well in practice, check out the images.
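Not the repo's actual code, but a toy version of the relax-then-quantize idea it describes, assuming a precomputed matrix A whose columns are rasterized candidate lines and a target darkness vector b (a real version would use a sparse nonnegative solver such as scipy.optimize.nnls):

```python
import numpy as np

def solve_relaxed(A, b, max_count=5):
    """Relax the integer thread-count problem to real least squares,
    then truncate to [0, max_count] and round to integers."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # continuous relaxation
    return np.clip(np.round(x), 0, max_count).astype(int)
```

The relaxation is what makes this tractable; the quantization step is a heuristic, so the result is close to, but not exactly, the optimal integer solution.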
That's awesome, nice work!<p>Personally, I can't say I understand image pre-processing well, but did you notice that the original Petros Vrellis images have a deeper black in some parts, like face edges or hair, and much lighter parts in the cheeks and foreheads, which creates a more detailed portrait?<p>Also, this name reminds me of the Jason Bourne movies, as they have Operation Treadstone there.
An interesting modification could be to have a small straight wall instead of each pin. A ball, whose trail remained visible, could bounce off each wall and create the image. The orientation of all the walls would determine what image was produced.