
Inceptionism: Going Deeper into Neural Networks

797 points by neurologic, almost 10 years ago

44 comments

davedx, almost 10 years ago
Worth reading the comments too.

One from Vincent Vanhoucke: "This is the most fun we've had in the office in a while. We've even made some of those 'Inceptionistic' art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of any sufficient complexity for a real task is completely opaque for very fundamental reasons: their non-linear structure makes it hard to project back the function they represent into their input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing or 'wants' to see simply by following gradients. This 'guided hallucination' technique is very powerful and the gorgeous visualizations it generates are very evocative of what's really going on in the network."
philipn, almost 10 years ago
The reason they look so 'fractal-like' (e.g. trippy!) is because they actually *are* fractals!

In the same way a normal fractal is a recursive application of some drawing function, this is a recursive application of different generation or "recognition -> generation" drawing functions built on top of the CNN.

So I believe that, given a random noise image, these networks don't generate the crazy trippy fractal patterns directly. Instead, that happens by feeding the generated image back to the network over and over again (with e.g. zooming in between).

Think of it a bit like a Rorschach test. But instead of ink blots, we'd use random noise and an artificial neural network. And instead of switching to the next Rorschach card after someone thinks they see a pattern, you continuously move the ink blot around until it looks more and more like the image the person thinks they see.

But because we're dealing with ink, and we're just randomly scattering it around, you'd start to see more and more of your original guess, or other recognized patterns, throughout the different parts of the scattered ink. Repeat this over and over again and you have these amazing fractals!
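In code, the feedback loop described above might look something like this minimal sketch, where "dream_step" is a hypothetical stand-in for one "recognize -> enhance" pass of the network (the function name and parameters are illustrative, not from the blog post):

    import numpy as np
    from scipy.ndimage import zoom

    def dream_zoom(img, dream_step, n_frames=100, scale=1.05):
        # Repeatedly amplify whatever the net recognizes, zooming in a
        # little between iterations so patterns re-emerge at every scale.
        frames = [img]
        for _ in range(n_frames):
            img = dream_step(img)  # one round of "recognize -> enhance"
            zoomed = zoom(img, (scale, scale, 1), order=1)  # zoom in slightly
            dh = (zoomed.shape[0] - img.shape[0]) // 2
            dw = (zoomed.shape[1] - img.shape[1]) // 2
            img = zoomed[dh:dh + img.shape[0], dw:dw + img.shape[1]]  # crop back
            frames.append(img)
        return frames

Played back as video, the accumulated frames give exactly the endlessly-deepening fractal effect described.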
meemoo, almost 10 years ago
Tweak image URLs for bigger images:

Ibis: http://3.bp.blogspot.com/-4Uj3hPFupok/VYIT6s_c9OI/AAAAAAAAAlc/_yGdbbsmGiw/s6400/ibis.png
Seurat: http://4.bp.blogspot.com/-PK_bEYY91cw/VYIVBYw63uI/AAAAAAAAAlo/iUsA4leua10/s6400/seurat-layout.png
Clouds: http://4.bp.blogspot.com/-FPDgxlc-WPU/VYIV1bK50HI/AAAAAAAAAlw/YIwOPjoulcs/s6400/skyarrow.png
Buildings: http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAmQ/UbA6j41w28o/s6400/building-dreams.png

I'd love to experiment with this and video. I predict a nerdy music video soon, and a pop video appropriation soon after.
moyix, almost 10 years ago
This appears to be the source of the mysterious image that showed up on Reddit's /r/machinelearning the other day too:

https://www.reddit.com/r/MachineLearning/comments/3a1ebc/image_generated_by_a_convolutional_network/
murbard2, almost 10 years ago
Two remarks:

1) Captain Obvious says: the "trippiness" of these images is hardly coincidental; these networks are inspired by the visual cortex.

2) They had to put a prior on the low-level pixels to get some sort of image out. This is because the system is trained as a discriminative classifier, and it never needed to learn this structure, since it was always present in the training set. This also means that the algorithm is going to ignore all sorts of structure which is relevant to generation but not relevant for discrimination, like the precise count and positioning of body parts, for instance.

This makes for some cool nightmarish animals, but fully generative training could yield even more impressive results.
ghoul2, almost 10 years ago
This is brilliant! I did something similar when I was trying to learn about neural networks a long, long time ago. The results were fascinating.

I was writing a neural network trainer to recognize simple 2D images. This was on a 300MHz desktop PC(!) so the network had to be pretty small, which implied that the input images were just compositions of simple geometric shapes: a circle within a rectangle, two circles intersecting, etc.

When I tried "recalling" the learnt image after every few epochs of training, I noticed the neural network was "inventing" more complex curves to better fit the image. Initially, only random dots would show up. Then it would invent straight lines and try to compose the target image out of one or more straight lines.

What was absolute fun to watch was that, at some point, it would stop trying to compose a circle with multiple lines and just invent the circle. And then proceed to deform the circle as needed.

During different runs, I could even see how it got stuck in various local minima. To compose a rectangle, mostly the net would create four lines, but having the lines terminate was obviously difficult. As an alternative, sometimes the net would instead try a circle, which it would gradually elongate, straightening out the circumference, slowly, to look more and more like a rectangle.

I was only an undergrad then, and was mostly doing this for fun. I do believe I should have written it up then; I do not even have the code anymore.

But good to know googlers do the same kinda goofy stuff :-)
pault, almost 10 years ago
I would love to see what would come out of a network trained to recognize pornographic images using this technique. :)
gradys, almost 10 years ago
Does anyone have a good sense of what exactly they mean here:

> Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.

Specifically, what does "we then pick a layer and ask the network to enhance whatever it detected" mean?

I understand that different layers deal with features at different levels of abstraction and how that corresponds with the different kinds of hallucinations shown, but how does it actually work? You choose the output of one layer, but what does it mean to ask the network to enhance it?
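One plausible reading, consistent with the blog post's description: treat the chosen layer's activations as the objective and do gradient ascent on the input pixels, so whatever that layer responded to gets boosted. A rough sketch under that assumption, using a pretrained torchvision GoogLeNet and skipping the jitter, multi-scale, and smoothing tricks such systems typically need to produce clean images:

    import torch
    from torchvision import models

    model = models.googlenet(weights="IMAGENET1K_V1").eval()

    grabbed = {}
    def hook(module, inputs, output):
        grabbed["act"] = output
    model.inception4c.register_forward_hook(hook)  # the chosen layer

    def enhance(img, steps=20, lr=0.05):
        # img: (3, 224, 224) float tensor; gradient ascent on the pixels
        # so the chosen layer's activations get stronger.
        img = img.clone().requires_grad_(True)
        for _ in range(steps):
            model(img.unsqueeze(0))
            objective = grabbed["act"].norm()  # "enhance whatever you detected"
            objective.backward()
            with torch.no_grad():
                img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
                img.grad = None
        return img.detach()

Picking an earlier or later layer for the hook is what shifts the result from edge-like strokes to whole objects.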
simonster, almost 10 years ago
The fractal nature of many of the "hallucinated" images is kind of fascinating. The parallels to psychedelic drug-induced hallucinations are striking.
intjk, almost 10 years ago
I'll repeat what I posted on Facebook because I thought it was clever: "Yes, but only if we tell them to dream about electric sheep."

So, tell the machine to think about bananas, and it will conjure up a mental image of bananas. Tell it to imagine a fish-dog and it'll do its best. What happens if/when we have enough storage to supply it a 24/7 video feed (aka eyes), give a robot some navigational logic (or strap it to someone's head), and give it the ability to ask questions, say, below some confidence interval (and us the ability to supply it answers)? What would this represent? What would come out on the other side? A fraction of a human being? Or perhaps just an artificial representation of "the human experience".

...what if we fed it books?
fizixer, almost 10 years ago
Some comments seem to be appreciating (or getting disgusted by) the aesthetics, but I think the "inceptionism" part should not be ignored:

We're essentially peeking inside a very rudimentary form of consciousness: a consciousness that is very fragile, very dependent, very underdeveloped, and full of "genetic errors". Once you have a functioning deep-learning neural network, you have the assembly language of consciousness. Then you start playing with it (as this paper did), you create a hello-world program, you solve the factorial function recursively, and so on. Somewhere in that universe of possible programs is hidden a program (or a set of programs) that will be able to perform the thinking process a lot more accurately.
davesque, almost 10 years ago
This is one of the most astounding things I've ever seen. Some of these images look positively like art. And not just art, but *good* art.
anigbrowl, almost 10 years ago
These images are remarkably similar to chemically-enhanced mammalian neural processing in both form and content. I feel comfortable saying that this is the Real Deal and Google has made a scientifically and historically significant discovery here. I'm also getting an intense burst of nostalgia.
joeyspn, almost 10 years ago
The level of resemblance to a psychotropic trip is simply fascinating. It's definitely *really close* to how our brain reacts when it is flooded with dopamine + serotonin.

I wonder if the engineers at Google could run the same experiment with audio... It would be fun to listen to the results.
guelo, almost 10 years ago
I'm starting to come around to sama's way of thinking on AI. This stuff is going to be scary powerful in 5-10 years. And it will continue to get more powerful at an exponential rate.
gojomo, almost 10 years ago
Facial-recognition neural nets can also generate creepy spectral faces. For example:

https://www.youtube.com/watch?v=XNZIN7Jh3Sg

https://www.youtube.com/watch?v=ogBPFG6qGLM

(Or if you want to put them full-screen on infinite loop in a darkened room: http://www.infinitelooper.com/?v=XNZIN7Jh3Sg&p=n and http://www.infinitelooper.com/?v=ogBPFG6qGLM&p=n )

The code for the first is available in a Gist linked from its comments; the creator of the second has a few other videos animating grid 'fantasies' of digit-recognition neural nets.
IanCal, almost 10 years ago
The one generated after looking at completely random noise, on the bottom row, second from the right (http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html), reminds me very heavily of The Starry Night:

https://www.google.com/culturalinstitute/asset-viewer/the-starry-night/bgEuwDxel93-Pg?utm_source=google&utm_medium=kp&hl=en-GB&projectId=art-project

Lovely imagery.

I never had much luck with generative networks. I did some work putting RBMs on a GPU, partly because I'd seen a Hinton talk showing starting with a low-level description and feeding it forwards, but I always ended up with highly unstable networks myself.
henryl, almost 10 years ago
I'll be the first to say it: it looks like an acid/shroom trip.
frankosaurus, almost 10 years ago
Really cool. You could generate all kinds of interesting art with this.

I can't help but think of people who report seeing faces in their toast. Humans are biased towards seeing faces in randomness. A neural network trained on millions of puppy pictures will see dogs in clouds.
djfm, almost 10 years ago
Now I'm thinking about all those Google cars, quietly resting in dark garages, dreaming about streets.
nl, almost 10 years ago
I'd really like to see what an Electric Sheep looks like. Maybe if they did a collaboration with the Android team?
jakozaur, almost 10 years ago
Request for startup: neural-network-on-demand artist.

E.g. a SaaS that takes your images and applies neural network transformations. Can you make a portrait of me so that I look like a king?
tzs, almost 10 years ago
Understanding what is going on in a neural network (or any other kind of machine-learning mechanism) when it makes a decision can be important in real-world applications.

For example, suppose you are a bank and you have built a neural network to decide if credit applications should be approved. The lending laws in the US require that if you reject someone, you tell them why.

Your neural network just gives a yes/no. It doesn't give a reason. What do you tell the applicant?

I have an idea how to deal with that, but I have no idea if it would satisfy the law. My approach is to run their application through multiple times, tweaking various items, until you get one that would be approved. You can then tell them it was that item that sunk them. For instance, suppose that if you raise their income by $5k, you get approval. You can tell them they were rejected for having income that is too low.
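As a toy sketch of that approach (all names here are hypothetical; model.predict stands in for whatever approve/deny classifier the bank has):

    def find_rejection_reason(model, application, tweaks):
        # Re-run the application with one field tweaked at a time; the
        # first tweak that flips the decision is reported as the reason.
        assert not model.predict(application), "only explains rejections"
        for field, new_value in tweaks:
            tweaked = {**application, field: new_value}
            if model.predict(tweaked):
                return f"rejected due to {field} (approved at {new_value!r})"
        return "no single-field change flips the decision"

    # e.g. find_rejection_reason(model, app, [("income", app["income"] + 5000)])

This is essentially what later came to be called a counterfactual explanation, though whether a regulator would accept it is a separate question.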
waffl, almost 10 years ago
While I think this is beautiful conceptually, I really am a bit terrified of the potential of this in reverse (the neural network for processing/understanding an image). With Google releasing their 'Photos' app, this network is about to get a direct pipeline of machine-learning imagery to accelerate everything. My main fear would be the potential for this technology to be employed by weaponized drones able to scan a scene (with, eventually, incredibly high-resolution cameras and microphones that far surpass human capability) and identify every single object/person in realtime (also at a rate that humans are incapable of).

Of course, there is great utility to be had as well; it just scares me to think about what could be done with this technology, in a mature form, if used for violent purposes.
dnr, almost 10 years ago
Am I the only one who found those images somewhat disturbing? I wonder if they're triggering something similar to http://www.reddit.com/r/trypophobia
tomlock, almost 10 years ago
These paintings remind me of Louis Wain's work when he was mentally ill.

Which makes me wonder: are these sophisticated neural nets mentally ill, and what would a course of therapy for them be like?
hliyan, almost 10 years ago
Am I the only person who is not entirely happy about the overuse of the pop-culture term 'inception' for everything that is remotely nested, recursive or strange-loop-like?

    In this paper, we will focus on an efficient deep neural network
    architecture for computer vision, codenamed Inception, which derives
    its name from the Network in Network paper by Lin et al [12] in
    conjunction with the famous "we need to go deeper" internet meme [1]
agumonkey, almost 10 years ago
Do computers dream about fractal antelopes? http://i.imgur.com/jZtbz7f.png
sitkack, almost 10 years ago
This was recently posted to HN: http://tjake.github.io/blog/2013/02/18/resurgence-in-artificial-intelligence/

It mentions running the NN in reverse. Quote:

    By far the most interesting thing I've learned about Deep Belief
    Networks is their generative properties. Meaning you can look inside
    the 'mind' of a DBN and see what it's imagining. Since deep belief
    networks are two-way like restricted Boltzmann machines, you can make
    hidden inputs generate valid visual inputs. Continuing with our
    handwritten digit example, you can start with the label input, say a
    '3' label, and activate it, then go in reverse through the DBN, and
    out the other end will pop a picture of a '3' based on the features
    of the inner layers. This is equivalent to our ability to visualize
    things using words. Go ahead, imagine a '3', now rotate it.
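A toy, single-layer illustration of that top-down generation, assuming an already-trained RBM weight matrix W of shape (n_visible, n_hidden) and a visible bias b_visible; a full DBN would first run a few Gibbs steps at the top layer with the label clamped, then propagate down layer by layer like this:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_visible(W, b_visible, hidden, rng=None):
        # One top-down step of an RBM: given a hidden vector (e.g. one
        # driven by a clamped '3' label higher up), sample a visible
        # "mental image" from the learned weights.
        rng = rng or np.random.default_rng()
        p_visible = sigmoid(hidden @ W.T + b_visible)
        return (rng.random(p_visible.shape) < p_visible).astype(np.float64)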
anigbrowl, almost 10 years ago
I understand the theory behind neural networks quite well, but am not so clear on how you feed them with images, e.g. how do you build a network that can process megapixel images of random aspect ratios, or audio files of unpredictable length?

I'm trying to get a sense of how much effort would be involved to replicate these results if Google isn't inclined to share its internal tools: to do a neural network version of Fractint, as it were, which one could train oneself. I have no clue which of the 30-40 deep learning libraries I found would be best to start with, or whether my basic instinct (to develop a node-based tool in an image/video compositing package) is completely harebrained.

Essentially I'm more interested in experimenting with tools to do this sort of thing by trying out different connections and coefficients than in writing the underlying code. Any suggestions?
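On the fixed-size question: classifiers like the one in the post take a fixed input resolution, so in practice images of arbitrary size are resized and cropped before the forward pass. One common recipe (this is the standard ImageNet preprocessing, not necessarily Google's internal pipeline):

    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),        # shorter side -> 256 px
        transforms.CenterCrop(224),    # fixed 224x224 network input
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])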
huskyr, almost 10 years ago
Very cool. I wonder if there's some example code on GitHub to generate images like this?
kriro, almost 10 years ago
Pretty interesting and beautiful. If I were still at my old job, I'd love to try and see how helpful this is in teaching neural networks. My first instinct is that it would be really valuable, because they tend to be black-boxy and hard to conceptualize.
bearzoo, almost 10 years ago
They are doing nothing but starting with random noise and then learning a representation of an image that maximizes the probability in the output layer (by suggesting to the network that this noise should actually have been recognized as a banana, or what have you) and backpropagating changes into the input layer. Essentially, this has been happening since 2003 in the natural language processing world, where we learn 'distributed representations' of words by starting with random representations of words and learning them from context by backpropagating changes into the input layer. Very cool though.
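A bare-bones sketch of that procedure, assuming a pretrained torchvision GoogLeNet (in the ImageNet-1k labeling, class 954 is 'banana'). Without an image prior this tends to yield noisy, adversarial-looking inputs rather than the pictures in the post, which also constrain the image statistics:

    import torch
    from torchvision import models

    model = models.googlenet(weights="IMAGENET1K_V1").eval()

    img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([img], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        loss = -model(img)[0, 954]  # push the 'banana' logit up
        loss.backward()
        opt.step()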
Animats, almost 10 years ago
This is fascinating. And important. We need better ways to see what neural nets are doing. At least for visual processing, we now have some.

This might be usable on music. Train a net to recognize a type of music, then run it backwards to see what comes out.

Run on the neural nets that do face popout (face/non-face, not face recognition), some generic face should emerge. Run on nets for text recognition, letter forms should emerge. Run on financial data vs. results... optimal business strategies?

But calling it "inceptionism" is silly. (Could be worse, as with "polyfill", though.)
darkFunction, almost 10 years ago
I don't understand what kind of NN they used on the painting and the photo of the antelopes(?). What was it pre-trained to recognise?

EDIT: in clarification, to pick out abstract features of an image, it must obviously be trained on many images. I'm curious about how it picked out seemingly unique characteristics of the painting, and what images it was trained on to get there.
m-i-l, almost 10 years ago
This story has been picked up by The Guardian: http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep
mkj, almost 10 years ago
Has anyone seen an explanation for why the images end up with that colour palette?
mraison, almost 10 years ago
Really nice. I'd be interested in seeing a more in-depth scientific description of how these images were actually generated. Are there any other publications related to this work?
imh, almost 10 years ago
It would be interesting to know what happens if, instead of tweaking it to better match a banana, they tweaked it to better match a banana and NOT match everything else.
antirez, almost 10 years ago
Are the coefficients of the neurons to "trigger" inside the layers just multiplied by some constant? It's apparently not stated in the original article.
patcon, almost 10 years ago
Wow. Funny how those images look like dreamscapes when the trained neural nets process random noise...

Kinda makes me contemplate my own conscious experience :)
spot, almost 10 years ago
A really early version of this: http://draves.org/fuse/ , published as open source in the early 90s. Not a NN, but it does have the same image matching/searching.
stared, almost 10 years ago
I am curious what they would see if fed a screenshot of their own code.
jastr, almost 10 years ago
If anyone wants to send this to their non-dev friends, here's the write-up I sent to mine!

https://medium.com/@stripenight/seeing-how-computers-might-think-e8ea3d1de081

---

tldr: To figure out how computers "think", Google asked one of its artificial intelligence algorithms to look at clouds and draw the things it saw!

There's this complex artificial intelligence algorithm called a neural network ( https://en.wikipedia.org/wiki/Artificial_neural_network ). It's essentially code which tries to simulate the neurons in a brain.

Over the last few years, there have been some really cool results, like using neural networks to read people's handwriting, or to figure out what objects are in a picture.

To start your neural network, you give it a bunch of pictures of dogs, and tell it that those pictures contain dogs. Then you give it pictures of airplanes, and say those are airplanes, etc. Like a child learning for the first time, the neural network updates its neurons to recognize what makes up a dog or an airplane.

Afterwards, you can give it a picture and ask if the pic contains a dog or an airplane.

The problem is that WE DON'T KNOW HOW IT KNOWS! It could be using the shape of a dog, or the color, or the distance between its legs. We don't know! We just can't see what the neurons are doing. Like a brain, we don't quite know how it recognizes things.

Google had a big neural network to figure out what's in an image, and they wanted to know what it did. So, they gave the neural net a picture, but stopped the neural net at different points, before it could finish deciding. When they stopped it, they asked it to "enhance" what it had just recognized. E.g. if it had just seen the outline of a dog, the net would return the picture with the outline a bit thicker. Or, if it saw colors similar to a banana's, it would return the picture with those colors looking more like a banana's colors.

This seems like a simple idea, but it's actually really complex, and really insightful! Amazing images here: https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB

Original article: http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html