No elephants: Breakthroughs in image generation

445 points by Kerrick · about 1 month ago

38 comments

x187463 · about 1 month ago
This is a before/after moment for image generation. A simple example is the background images on a ton of (mediocre) music YouTube channels. They almost all use AI-generated images that are full of nonsense the closer you look. Jazz channels will feature coffee shops with garbled text on the menu and furniture blending together. I bet all of that disappears over the next few months.

On another note, and perhaps others are feeling similarly, I am finding myself surprised at how little use I have for this stuff, LLMs included. If, ten years ago, you told me I would have access to tools like this, I'm sure I would have responded with a never-ending stream of ideas and excitement. But now that they're here, I just sort of poke at it for a minute and carry on with my day.

Maybe it's the unreliability on all fronts, I don't know. I ask a lot of programming questions and appreciate *some* of the autocomplete in vscode, but I know I'm not anywhere close to taking full advantage of what these systems can do.
card_zero · about 1 month ago
Looking at the example where the coffee table is swapped, I notice every time the image is reprocessed it mutates, based on the previous iteration, and objects become more bizarre each time, like Chinese whispers.

* The weird-ass basket decoration on the table originally has some big chain links (maybe anchor chain, to keep the theme with the beach painting). By the third version, they're leathery and are merging with the basket.

* The candelabra light on the wall, with branch decorations, turns into a sort of skinny minimalist gold stag head, and then just a branch.

* The small table in the background gradually loses one of its three legs, and ends up defying gravity.

* The freaky green lamps in the window become at first more regular, then turn into topiary.

* Making the carpet less faded turns up the saturation on everything else, too, including the wood the table is made from.
nowittyusername · about 1 month ago
There is circumstantial evidence out there that 4o image manipulation isn't done within the 4o image generator in one shot, but is a workflow run by an agentic system. Meaning this: the user inputs the prompt "create an image with no elephants in the room" > the prompt goes to an LLM which preprocesses the human prompt > it outputs a prompt that it knows works well with this image generator > that LLM-processed prompt is sent to the image generator > it creates an image of a room. The same happens with edits, but it's a lot more complicated: function-calling tools are involved, with many layers of edits done behind the scenes. Try it yourself: take an image, send it in, and have 4o edit it for you in some way; then ask it to edit again, and again, and so on. You will notice a sepia filter being applied on every edit, and the image ends up more and more sepia-toned with each one. This is because one of the steps in the workflow is naively applied without consideration of the multi-edit case. If this were a one-shot solution where editing is done within the 4o image model itself, the sepia problem wouldn't be there.
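If that hypothesis is right, the compounding filter is easy to model. A minimal sketch in Python, with stand-in functions throughout (none of these are real OpenAI APIs; they only illustrate how a fixed per-edit finishing step would accumulate across repeated edits):

```python
def rewrite_prompt(user_prompt: str) -> str:
    # Stand-in for the LLM preprocessing step that turns a raw
    # user request into a prompt the image model handles well.
    return f"[optimized] {user_prompt}"

def generate_edit(image: dict, prompt: str) -> dict:
    # Stand-in for the image model applying the requested edit.
    return {**image, "last_edit": prompt}

def apply_tone_filter(image: dict, warmth: float) -> dict:
    # A fixed finishing step: applied unconditionally on every
    # edit, its effect accumulates -- the hypothesized sepia drift.
    return {**image, "warmth": image.get("warmth", 0.0) + warmth}

def edit(image: dict, user_prompt: str) -> dict:
    edited = generate_edit(image, rewrite_prompt(user_prompt))
    return apply_tone_filter(edited, warmth=0.05)

img = {"warmth": 0.0}
for request in ["add a lamp", "remove the rug", "brighten it"]:
    img = edit(img, request)
print(round(img["warmth"], 2))  # 0.15: three edits, three filter passes
```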
probably_wrong · about 1 month ago
> *Is it okay to reproduce the hard-won style of other artists using AI? Who owns the resulting art? Who profits from it? Which artists are in the training data for AI, and what is the legal and ethical status of using copyrighted work for training? These were important questions before multimodal AI, but now developing answers to them is increasingly urgent.*

I have to disagree with the conclusion. This was an important discussion to have two to three years ago, then we had it online, and then we more or less agreed that it's unfair for artists to have their works sucked up with no recourse.

What the post should say is "we know that this is unfair to artists, but the tech companies are making too much money from them and we have no way to force them to change".
shubhamjain · about 1 month ago
The Ghibli trend completely missed the real breakthrough — and it's this. The ability to closely follow text, understand the input image, and maintain context of what's already there is a massive leap in image generation. While Midjourney delivered visually stunning results, I constantly struggled to get anything specific out of it, making it pretty much useless for actual workflows.

4o is the first image-generation model that feels genuinely useful for more than just pretty things. It can produce comics, app designs, UI mockups, storyboards, marketing assets, and so on. I saw someone make a multi-panel comic with it with consistent characters. Obviously, it's not perfect. But just getting 90% of the way there is a game changer.
gcanyon · about 1 month ago
It's interesting to hear people side with the artists when in previous discussions on this forum I've gotten significant approval/agreement arguing that copyright is far too long.

As I've argued in the past, I think copyright should last maybe five years: in this modern era, monetizing your work doesn't (usually) have to take more than a short time. I'd happily concede to some sort of renewal process to extend that period, especially if some monetization method is in process. Or some sort of mechanical rights process to replace the "public domain" phase early on. Or something -- I haven't thought about it *that* deeply.

So thinking about that in this process: everyone is "ghiblifying" things. Studio Ghibli has been around for very nearly 40 years, and their "style" was well established over 35 years ago. To me, that (should) make(s) it fair game.

The underlying assumption, I think, is that all the "starving" artists are being ripped off, but are they? Let's consider the numbers -- there are a handful of large-scale artists whose work is obviously replicable: Ghibli, the Simpsons, Pixar, etc. None of them is going hungry because a machine model can render a prom pic in their style. Then you get the other 99.999% of artists, *all* of whose work went into the model. They *will* be hurt, but not specifically because *their* style has been ingested and people want to replicate *their* style.

Rather, they will be hurt because no one knows their style, nor cares about it; people just want to be able to say e.g. "Make a charcoal illustration of me in this photo, but make me sitting on a horse in the mountains."

It's very much like the arguments about piracy in the past: 99.99% of people were never going to pay an artist to create that charcoal sketch. The 0.01% who might are arguably causing harm to the artist(s) by not using them to create that thing, but the rest were never going to pay for it in the first place.

All to say it's complicated, and obviously things are changing dramatically, but it's difficult to make the argument that "artists need to be compensated for their work being used to train the model" without both a reasonable plan for how that might be done, and a better-supported argument for why.
haswell · about 1 month ago
> *The question isn't whether these tools will change visual media, but whether we'll be thoughtful enough to shape that change intentionally.*

Unfortunately I think the answer to this question is a resounding "no".

The time for thoughtful shaping was a few years ago. It feels like we're hurtling toward a future where instead we'll be left picking up the pieces and assessing the damage.

These tools are impressive and will undoubtedly unlock new possibilities for existing artists and for people who are otherwise unable to create art.

But I think it's going to be a rough ride, and whatever new equilibrium we reach will be the result of much turmoil.

Employment for artists won't disappear, but certain segments of the market will just use AI because it's faster, cheaper, and doesn't require time-consuming iterations and communication of vision. The results will be "good enough" for many.

I say this as someone who has found these tools incredibly helpful for thinking. I have aphantasia, and my ability to visualize via AI is pretty remarkable. But I can't bring myself to actually publish these visualizations. A growing number of blogs and YouTube channels don't share these qualms, and every time I encounter them in the wild I feel an "ick". It'll be interesting to see if more people develop this feeling.
justinator · about 1 month ago
But the annotations are still wrong:

https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bca939e-53c3-40cb-a7cc-6177d2765233_2544x985.png

(nice URL btw)

The room, the door, the ceiling are all of a scale to fit many sizes of elephants.
m4thfr34k · about 1 month ago
I am very impressed with the current image generators out there (4o, Leonardo, etc.), but I cannot wait until they include some step to actually "check their work". Ask one to produce a watch with the time set to 6:37: it fails every time, because almost all watch photos out there are set to a specific time, and this seems like something an initial "did I do this right" check could catch. The time example is trivial, but a general "does this output actually make sense considering what the user asked" check would be tremendously valuable.
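A sketch of what such a check could look like: a generate-then-verify loop. Everything here is a stand-in (any text-to-image callable plus any vision-capable model as the judge), not a specific vendor API:

```python
MAX_ATTEMPTS = 4

def generate_checked(prompt: str, gen, judge):
    """gen: callable(str) -> image; judge: callable(image, str) -> str."""
    feedback = ""
    image = None
    for _ in range(MAX_ATTEMPTS):
        image = gen(prompt + feedback)
        verdict = judge(
            image,
            f"Does this image satisfy the request '{prompt}'? "
            "Answer YES or NO, then describe any mismatch."
        )
        if verdict.startswith("YES"):
            return image
        # Feed the critique back into the next attempt, e.g.
        # "NO: the watch hands show 10:10, not 6:37".
        feedback = f" Correction needed: {verdict}"
    return image  # best effort after MAX_ATTEMPTS
```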
Retr0id · about 1 month ago
I had a reasonable intuition for how the "old" method works, but I still don't grok this new approach.

"in multimodal image generation, images are created in the same way that LLMs create text, a token at a time"

Is there some way to visualise these "image tokens", in the same way I can view tokenized text?
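For intuition, here is a self-contained toy in the VQ style that open image tokenizers use (an assumption about the general family; OpenAI has not published 4o's actual tokenizer): an encoder produces a grid of latent vectors, each is snapped to its nearest codebook entry, and the entry indices are the "image tokens", printable just like text token IDs.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 8))   # 512 entries, 8 dims each

def tokenize(latents: np.ndarray) -> np.ndarray:
    """latents: (H, W, 8) grid from an image encoder (faked below).
    Returns one integer codebook index per spatial cell."""
    h, w, d = latents.shape
    flat = latents.reshape(-1, d)
    # Squared distance from every cell to every codebook entry.
    dists = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).reshape(h, w)

fake_latents = rng.normal(size=(16, 16, 8))  # stand-in encoder output
tokens = tokenize(fake_latents)
print(tokens)  # a 16x16 grid of integers in [0, 512) -- the "tokens"
```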
NitpickLawyer · about 1 month ago
> The results are not as good as a professional designer could create but are an impressive first prototype.

I like to look at how far we've come since the early days of Stable Diffusion. It was fascinating to play with it back then, but it quickly became apparent that it was "generic" and not suited for "real work" because it lacked consistency, text capabilities, fingers! and so on... Looking at these results now, I'm amazed at the quality, consistency and ease of use. Gone are the days of doing alchemy on words and adding a bunch of "in the style of Rutkovsky, golden hour, hd, 4k, pretty please ..." at the end of prompts.
smusamashah · about 1 month ago
I am waiting for the day I can give one of these a scene snippet from "The Hitchhiker's Guide to the Galaxy" (or any book) and it can draw it for me: the gold planets, waking up on the beach, the Total Perspective Vortex, etc.

I like the book, but there are quite a few scenes that are hard to visualize and make sense of. An image generator that can follow that language and detail will be amazing. Even more awesome will be if it stays consistent across follow-ups.
orbital-decay · about 1 month ago
4o still exhibits the "pink elephant effect", it's just... subtler, and tends to reveal itself on a complex or confusing prompt. Negations are also still not handled properly; they tend to slightly confuse the model and decrease the accuracy of the answer or the generated picture. The same is true for any other LLM. Moreover, the author is asking the model to rationalize the decision he already made ("tell me why there can't be any elephants"), which could work as an equivalent to a CoT step.

It's "just" a much bigger and much better trained model. Which is a quality on its own, absolutely no doubt about that. Fundamentally the issue is still there though, just less prominent. Which kind of makes sense - imagine the prompt "not green", what even is that? It's likely slightly out of distribution and requires representing a more complex abstraction, so the accuracy will necessarily be worse than stating the range of colors directly. The result might be accurate, until the model is confused/misdirected by something else, and suddenly it's not.

I think in the end none of the architectural differences will matter beyond the scaling. What will matter a lot more is data diversity and training quality.
ziofill · about 1 month ago
I usually agree with most of Gary Marcus' points, but I'd really like to hear his take on this. One of his examples is that "the system can't generate a horse riding an astronaut", and in fact I tried a lot in the past but it would always draw the astronaut on top of the horse. Well, here is the result now: https://postimg.cc/QFtRjbHM
hansmayer · about 1 month ago
> "Image generation is likely to be very disruptive in ways we don't understand right now."

Is anyone getting tired of these formulations? When a tech is disruptive, we know it immediately. Uber was disruptive. AirBnB, Gmail, Amazon, even Facebook at one point. You just knew it; nobody was writing long essays trying to justify those products. Robots generating statistically median images is impressive, but not disruptive at all. If something is "likely" to be "disruptive", but in ways "we don't understand yet", how can the claim even be made? What is it based on? If we do not understand it yet, how can we understand whether it is "likely to be disruptive"?
xnorswap · about 1 month ago
The "How to build a boardgame" infographic looks like half my LinkedIn "feed" now, but about a boardgame instead of a random basic programming / recruitment / sales topic.

"Feed" is in quotes because my feed seems to be 90% suggested posts.
morkalork · about 1 month ago
Huh, the coffee table reminds me of all those cheap e-retailers who very clearly (and badly) photoshop their clothes onto the same 2 or 3 stock model images. If you thought shopping online sucked before, it's just going to get even worse now.
Zr01 · about 1 month ago
I'm more interested in the technical details than the publicity. Pretty much anyone these days can learn what a diffusion model is, how it's implemented, what the control flow is. What about these new multimodal LLMs? They have no problems with text, and they generate images using tokens, but how exactly? There are no open-source implementations that I know of, and I'm struggling to find details.
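Absent an open implementation, the control flow generally assumed for autoregressive image generation looks like this toy sketch; the next-token model and the VQ decoder are stand-ins, so this only illustrates the loop, not 4o's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, GRID = 512, 16          # 512 possible image tokens, 16x16 grid

def next_token_probs(prefix: list[int]) -> np.ndarray:
    # Stand-in for a transformer conditioned on the text prompt
    # plus all image tokens generated so far.
    p = rng.random(VOCAB)
    return p / p.sum()

tokens: list[int] = []
for _ in range(GRID * GRID):   # one cell at a time, raster order
    probs = next_token_probs(tokens)
    tokens.append(int(rng.choice(VOCAB, p=probs)))

grid = np.array(tokens).reshape(GRID, GRID)
print(grid.shape)              # (16, 16); a VQ decoder would then
                               # map this token grid back to pixels
```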
lou1306 · about 1 month ago
Putting my Wittgenstein hat on: How can I ever be sure that the machine is not generating an incredibly tiny elephant, maybe hidden under the sofa?
eapriv · about 1 month ago
It’s always fun to read posts like that: they say “look at this amazing thing it drew”, and the image is utter garbage.
klik99 · about 1 month ago
I’ve seen a few YouTube thumbnail-generation examples on Reddit (I’m on vacation so not gonna search for a link) that show multimodal with inline text giving specific instructions. It’s impressed me in a way that I haven’t been with LLMs for 2 years, i.e. it’s not just getting better at what it already does, but a totally new and intuitive way of working with generative AI.

My understanding is it’s a meta-LLM approach, using multiple models and having them interact. I feel like it’s also evidence that OpenAI is not seriously pursuing AGI (just my opinion, I know there’s some on here who would aggressively disagree), but rather market use cases. It feels like an acceptance that any given model, at least now, has its own limitations but can get more useful in combination.
qiqitori · about 1 month ago
Wha- wha- what? I tried to generate an image in ChatGPT after the announcement a while back, and the image wasn't bad, but the text on it (numbers) was nonsense. (An analog gauge with nonsense numbers instead of e.g. 10, 20, 30, 40, etc.)

Gave it another chance now, explicitly calling out the numbers. Well, they are improved, but I'm not sure how useful this result is (the spacing between numbers is a little off, and there's still some curious counting going on). Maybe it kind of looks like the numbers are pasted in after the fact?

https://chatgpt.com/share/67f4fa33-70dc-8012-8e1e-2dea563d3def
cadamsdotcom · about 1 month ago
Anyone stuck claiming AI isn't useful: there are so many useful things it can now do. With text that makes sense, you can generate invitations for your next picnic. That wasn't possible mere weeks ago.

Wonderful to be alive for these step changes in human capability.
vunderba · about 1 month ago
4o, despite OpenAI's practically draconian content policies, is a pretty big leap forward. I put together a comparison of some of the most competitive generative models (Imagen, 4o, Flux, and MJ7) where I prioritized increasingly difficult *prompt adherence*. If Imagen 3 had 4o's multimodal capabilities (being able to make constant adjustments against a generated image by prompting), I would say it's nearly on par with 4o.

https://genai-showdown.specr.net
rkharsan64 · about 1 month ago
Are there any local models that use this new approach to generating images?
roenxi · about 1 month ago
That proper "no elephants" first image is hilarious. Another key point here is that generative AI's meme game is getting rather strong.

Which isn't a small thing; humour is an advanced soft skill.
swframe2 · about 1 month ago
I'm hoping this alternative image-prompt preprocessing technique gets more attention:

https://art-msra.github.io/

Basically, the user's image prompt is converted into several prompts that generate parts of the final image as layers, which are then combined. The layers remain available, so edits can cleanly update one section without affecting the others.
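A minimal sketch of that structure with generation stubbed out; the names are illustrative, not the project's API. The point is that each sub-prompt yields its own RGBA layer, so one layer can be regenerated and re-composited without touching the rest:

```python
import numpy as np

H = W = 64

def generate_layer(sub_prompt: str) -> np.ndarray:
    # Stand-in for a per-layer image generator; returns RGBA floats.
    rng = np.random.default_rng(abs(hash(sub_prompt)) % 2**32)
    layer = rng.random((H, W, 4)).astype(np.float32)
    layer[..., 3] *= 0.5                  # partial alpha for the demo
    return layer

def composite(layers: list[np.ndarray]) -> np.ndarray:
    # Paint layers in order, each one "over" the accumulated result.
    out = np.zeros((H, W, 3), dtype=np.float32)
    for layer in layers:
        a = layer[..., 3:4]
        out = layer[..., :3] * a + out * (1 - a)
    return out

layers = [generate_layer(p)
          for p in ["room background", "coffee table", "lamp"]]
image = composite(layers)

# Edit one section cleanly: regenerate just that layer, re-composite.
layers[1] = generate_layer("glass coffee table")
image = composite(layers)
```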
lupusreal · about 1 month ago
The image annotated to explain why no elephants are possible is very amusing.

To me, this kind of image generation isn't very interesting for creating final products, but is *extremely* useful for communicating design intent to other people when collaborating on large creative projects. Previously I used crude "ms paint" sketches for this, which was much more tedious and less effective.
thrance · about 1 month ago
Each generation follows the prompt a little bit better than the last, but I don't see any revolutions. Fingers are still messed up, eyes are wonky and legs sometimes still fork into two. Fundamentally it's still the same diffusion technique, with the same limitations.
DonHopkins · about 1 month ago
Q: How do you know if there's an elephant hiding under your bed?

A: Your face is pressed up against the ceiling!
mrconter11 · about 1 month ago
Isn't it ironic that it ended up being harder to get a computer to explicitly not create a photorealistic image of an elephant than to have it create one?
freeamz · about 1 month ago
Hmmm, isn't Stable Diffusion already doing that?
Der_Einzige · about 1 month ago
The #1 reason that this technology won't proliferate more quickly is that humans are a bunch of COOMERS!

We get Stable Diffusion v1.5 and SDXL, and what does the community go do with it? Lmao, see civit.ai and its literal hundreds of thousands of NSFW LoRAs. The most popular model today on that website is the NSFW anime version of SDXL, called "Pony Diffusion" (I'm literally not making this up. A bunch of Bronies made this model!)

Imagine that an open-source image generator which does tokens autoregressively like this, at this quality, is released.

The world is simply not ready for the amount of horny stuff that is going to be produced (especially without consent). It appears that the male libido really is the reason for most bad things in the world. We are truly the "villains of history".
NiloCK · about 1 month ago
The 'before' image passes the test this time in a "Treachery of Images" sort of way.
d4rkp4ttern · about 1 month ago
Diagrams are still a big unsolved problem. Making diagrams for a talk or paper is an extremely tedious process, and I am still waiting for a good multimodal LLM solution for this. It should take a sketch and/or text description of what you want, and in a few iterations you should get what you want. GPT-4o tries hard, but results are still bad.
globnomulous · about 1 month ago
&gt; The past couple years have been spent trying to figure out what text AI models are good for, and new use cases are being developed continuously.<p>In other words, people who care about money and only money are pushing for these tools because they&#x27;re convinced they&#x27;ll reduce labor costs and somehow also improve the resulting product, while engineers and creative professionals who have these tools foisted upon them by unimaginative business people continue to insist that the tools are a solution in search of a problem, that they&#x27;re stochastic parrots and plagiarism automata that bypass all of the important parts of engineering and creativity and make the absolutely, breathtakingly idiotic mistake of supposing it&#x27;s possible to leap to a finished product without all the work and problem solving involved in getting there.<p>&gt; The line between human and AI creation will continue to blur<p>This is utter nonsense, and hype-man prognosticators in the tech world like the author of the article turn out pretty much 100% of the time to be either grifters or saps who have fallen for the grifters&#x27; nonsense.
1970-01-01 · about 1 month ago
For the first 9 years of an elephant's life, it can easily walk into that room. I don't find this to be a breakthrough. I find it to be clickbait.
ge96 · about 1 month ago
&gt; we guac you covered