
How we used GPT-4o for image detection with 350 similar illustrations

222 points by olup 4 months ago

20 comments

sashank_1509 4 months ago
This has been my experience. Foundation models have completely changed the game of ML. Previously, companies might have needed to hire ML engineers familiar with ML training, architectures, etc. to get mediocre results. Now companies can just hire a regular software engineer familiar with foundation model APIs to get excellent results. In some ways it is sad, but in other ways the result you get is so much better than we achieved before.

My example was an image segmentation model. I managed to create a dataset of 100,000+ images and was training UNets and other advanced models on it. I always reached a good validation loss, but my data was simply not diverse enough, and I faced a lot of issues in actual deployment, where the data distribution kept changing on a day-to-day basis. Then I tried DINOv2 from Meta, fine-tuned on 4 images, and it solved the problem, handling all the variations in lighting etc. with far higher accuracy than I ever achieved. It makes sense: DINO was trained on 100M+ images; I would never be able to compete with that.

In this case, the company still needed my expertise, because Meta just released the weights, so someone had to set up the fine-tuning pipeline. But I can imagine a fine-tuning API like OpenAI's requiring no expertise outside of simple coding. If AI results depend on scale, it naturally follows that only a few well-funded companies will build AI that actually works, and everyone else will just use their models. The only way this trend reverses is if compute becomes so cheap and ubiquitous that everyone can achieve the necessary scale.
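A minimal sketch of the few-shot recipe described above, assuming the public torch.hub DINOv2 weights: freeze the backbone and train only a small linear head on its patch tokens for binary segmentation. This illustrates the technique, not the commenter's actual pipeline; data loading and the training loop are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen DINOv2 backbone; only the 1x1-conv head is trained.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Conv2d(384, 1, kernel_size=1)  # 384 = ViT-S/14 embedding dim
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def predict(imgs):
    """imgs: (B, 3, 224, 224), ImageNet-normalized. Returns (B, 1, 224, 224) logits."""
    with torch.no_grad():
        tokens = backbone.forward_features(imgs)["x_norm_patchtokens"]  # (B, 256, 384)
    b, n, c = tokens.shape
    grid = tokens.permute(0, 2, 1).reshape(b, c, 16, 16)  # 16 = 224 / patch size 14
    return F.interpolate(head(grid), size=(224, 224), mode="bilinear")

# One training step on a handful of labeled masks (values in {0, 1}):
#   loss = loss_fn(predict(imgs), masks); loss.backward(); optimizer.step()
```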
Imnimo 4 months ago
It's tough to judge without seeing examples of the targets and the user photos, but I'm curious if this could be done with just old-school SIFT. If it really is exactly the same image in the corpus and on the wall, does a neural embedding model really buy you a lot? A small number of high-confidence tie points seems like it'd be all you need, but it probably depends a lot on just how challenging the user photos are.
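A minimal sketch of the classic SIFT route this comment suggests, using OpenCV: match the user photo against each reference illustration and keep the candidate with the most tie points surviving Lowe's ratio test. The file paths are hypothetical placeholders, not from the write-up.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def good_matches(des_a, des_b, ratio=0.75):
    """Count matches passing Lowe's ratio test (high-confidence tie points)."""
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    return sum(1 for pair in pairs if len(pair) == 2
               and pair[0].distance < ratio * pair[1].distance)

reference_paths = [f"refs/{i}.jpg" for i in range(350)]  # hypothetical corpus

query = cv2.imread("user_photo.jpg", cv2.IMREAD_GRAYSCALE)
_, q_des = sift.detectAndCompute(query, None)

best_id, best_score = None, -1
for ref_id, path in enumerate(reference_paths):
    ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, r_des = sift.detectAndCompute(ref, None)
    score = good_matches(q_des, r_des)
    if score > best_score:
        best_id, best_score = ref_id, score

print(best_id, best_score)  # candidate with the most high-confidence tie points
```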
JayShower 4 months ago
Alternative solution that would require less heavy lifting of ML but a little more upfront programming: it sounds like the cars are arranged in a grid on the wall. Maybe it would be possible to narrow down which car the user took a photo of by looking at the photos of the surrounding cars as well, and hardcoding into the system the position of each car relative to one another? You could potentially do that locally very quickly (maybe even at the level of QR-code speed) versus doing an embedding + LLM call.

Con of this approach is that it requires maintenance if they ever decide to change the illustration positions.
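A minimal sketch of this neighbor-lookup idea: with the wall layout hardcoded as a grid, a center car can be identified purely from whichever neighbors were recognized around it. The grid contents and car IDs here are hypothetical.

```python
# Hardcoded wall layout (hypothetical 3x3 excerpt of the real grid).
GRID = [
    ["car_01", "car_02", "car_03"],
    ["car_04", "car_05", "car_06"],
    ["car_07", "car_08", "car_09"],
]

def neighbors(r, c):
    """IDs of the cars directly above, below, left, and right of (r, c)."""
    cells = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return {GRID[i][j] for i, j in cells
            if 0 <= i < len(GRID) and 0 <= j < len(GRID[0])}

def identify(recognized_neighbors):
    """Return the car whose hardcoded neighbors best match what was seen."""
    best, best_overlap = None, -1
    for r, row in enumerate(GRID):
        for c, car in enumerate(row):
            overlap = len(neighbors(r, c) & recognized_neighbors)
            if overlap > best_overlap:
                best, best_overlap = car, overlap
    return best

print(identify({"car_02", "car_04", "car_06"}))  # -> car_05
```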
suriya-ganesh 4 months ago
This tracks with my experience. We built a complex processing pipeline for an NLP classification, search, and comprehension task, using a vector database of proprietary data, etc.

We ran a benchmark of our system against a plain LLM call, and the LLM performed much better for so much cheaper in terms of dev time, complexity, and compute. Incredible time to be working in the space, seeing traditional problems eaten away by new paradigms.
wongarsu 4 months ago
Interesting approach to a very interesting challenge, given how close the images supposedly are.

With the limited training data they have, I'm surprised they don't mention any attempts at synthetic training data. Make (or buy) a couple of museum scenes in Blender, hang one of the images there, take images from a lot of angles, then repeat for more scenes, lighting conditions, and all 350 images. Should be easy to script. Then train YOLO on those images, or if that still fails, use their embedding approach with those training images.
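A rough sketch of how that script could look with Blender's Python API (bpy), run inside a .blend file that already contains a museum scene. The object name "Artwork", the "Image Texture" node, the camera arc, and the file paths are all assumptions; the camera is assumed to have a Track To constraint aimed at the artwork plane.

```python
import math
import bpy

scene = bpy.context.scene
artwork = bpy.data.objects["Artwork"]  # plane the illustration hangs on (assumed name)
tex_node = artwork.active_material.node_tree.nodes["Image Texture"]
cam = scene.camera  # assumed: Track To constraint keeps it aimed at the plane

for img_id in range(350):                 # one render sweep per illustration
    tex_node.image = bpy.data.images.load(f"//illustrations/{img_id}.png")
    for angle in range(-60, 61, 20):      # camera arc in front of the wall
        rad = math.radians(angle)
        cam.location = (4 * math.sin(rad), -4 * math.cos(rad), 1.6)
        scene.render.filepath = f"//renders/{img_id}_{angle:+03d}.png"
        bpy.ops.render.render(write_still=True)
```

Varying lighting would be one more loop over light objects or world settings; the rendered crops could then feed YOLO training directly.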
olup 4 months ago
First time for me posting this kind of story - I thought it would make an interesting case study in solving a hard computer vision problem with a crafty product engineering team.
hackerdood 4 months ago
Very neat explanation of solving these kinds of unique challenges, especially given how similar the illustrations were.

One question I had: knowing how difficult it was to train the model with the base images, and given that the client didn't have time to photograph them, did you consider flying someone out to the museum for a couple of days to photograph each illustration from several angles under the actual lighting throughout the day? Or potentially hiring a photographer near the museum to do that? It seems like a round-trip ticket plus a couple of nights in a hotel could have saved a lot of headache, providing more images to turn into synthetic training data. Even if you still had to resort to using 4o as a tiebreaker, it could be that you'd only need to present two candidates, as the third might have a much lower similarity score than the second candidate. Good write-up either way.
lynguist 4 months ago
Huh, I think this YouTube short is on the same topic: https://youtube.com/shorts/DA_-6296G5o?si=BLKcSP2Q1jAaca9K

Finding new geoglyphs from known examples.
GaggiX 4 months ago
Is there a reason to choose VGG16 over more modern models?
kredd 4 months ago
A bit tangential, but I think we will see a good chunk of small teams building competing products in different software business segments by doubling down on productivity and offering a cheaper option due to less operational overhead (read: paying engineers). I can think of at least two businesses that could be undercut on cost if a team can automate a good chunk of the work.
saint_yossarian 4 months ago
I mean, cool tech, but why not just print a QR code next to each illustration?
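For scale, the QR alternative really is about one line per illustration with the `qrcode` package; the URL scheme here is hypothetical.

```python
import qrcode

# One printable QR code per illustration, linking to its detail page.
for car_id in range(350):
    img = qrcode.make(f"https://museum.example/cars/{car_id}")
    img.save(f"qr_{car_id:03d}.png")
```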
yuvalr1 4 months ago
A completely different approach that doesn't require heavy AI would be an app on the user's phone that does this:

1. Measure the distance from the wall (standard image processing)

2. Use the rotations from the phone's gyro sensors to conclude which car is being looked at

I wonder if this could be as accurate, though.
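A minimal sketch of the geometry behind step 2: given the distance to the wall and the phone's yaw/pitch relative to facing the wall head-on, project the view ray onto the wall and snap to the nearest grid cell. The grid layout and spacing are hypothetical.

```python
import math

def wall_point(distance_m, yaw_deg, pitch_deg):
    """Where the camera's center ray hits the wall, in wall coordinates (meters)."""
    x = distance_m * math.tan(math.radians(yaw_deg))    # horizontal offset
    y = distance_m * math.tan(math.radians(pitch_deg))  # vertical offset
    return x, y

# Hypothetical layout: cars on a 0.5 m grid, {(col, row): car_id}.
CARS = {(c, r): f"car_{r * 10 + c}" for r in range(5) for c in range(10)}

def lookup(distance_m, yaw_deg, pitch_deg, spacing=0.5):
    x, y = wall_point(distance_m, yaw_deg, pitch_deg)
    col, row = round(x / spacing), round(y / spacing)
    return CARS.get((col, row))  # None if the ray misses the grid

print(lookup(2.0, 10.0, 5.0))  # 2 m from the wall, looking slightly up-right
```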
vessenes 4 months ago
Thanks for the “bitter lesson” news from the frontlines. Curious: did you experiment with 4o as the sole pipeline? And of course, as I think you mention, it would be interesting to know if, say, Llama 8B could do a similar job as well.

Congrats on shipping.
the_duke 4 months ago
Side question: is there any good model that allows for image similarity detection across a large image set, that can be incrementally augmented with new images?

You'd somehow have to generate an embedding for each image, I presume.
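One common answer, sketched minimally: embed each image with a pretrained model (CLIP here, via sentence-transformers) and keep the vectors in a FAISS index, which supports incremental `add()` calls as new images arrive. Model choice and the path-based bookkeeping are assumptions for illustration.

```python
import faiss
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # 512-dim image embeddings
index = faiss.IndexFlatIP(512)                # inner product = cosine on unit vectors
paths = []                                    # FAISS row id -> file path

def add_images(new_paths):
    embs = model.encode([Image.open(p) for p in new_paths],
                        normalize_embeddings=True)
    index.add(embs)          # incremental: just appends to the index
    paths.extend(new_paths)

def most_similar(query_path, k=5):
    q = model.encode([Image.open(query_path)], normalize_embeddings=True)
    scores, ids = index.search(q, k)
    return [(paths[i], float(s)) for i, s in zip(ids[0], scores[0])]
```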
gunalx 4 months ago
Cool real-life use case. I don't think LLMs usually get applied sensibly where they should be, and I'm glad a generic kNN model was also used here: it cuts costs and is just more suitable for the task.
rldjbpin 4 months ago
Reads to me like 95% of the "conventional AI" was applied to the problem, and then using an LLM at the end seems to work like a lucky three-faced die.

When "embeddings" are used to perform the closeness test, you are using a pretrained computer vision model behind the scenes. It is doing the vast majority of the work, filtering hundreds of images down to a handful.

The visual LLM works on textual descriptions that seem far too close for similar images. Regardless, more power to the team for finding something that works for them.
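A minimal sketch of the "closeness test" this comment describes: a frozen pretrained vision model produces one embedding per image, and nearest-neighbor search over cosine similarity filters the hundreds of candidates down to a handful before any LLM is involved. The embeddings are assumed to exist already.

```python
import numpy as np

def top_k(query_emb, reference_embs, k=3):
    """reference_embs: (N, D) array, one row per reference image."""
    q = query_emb / np.linalg.norm(query_emb)
    refs = reference_embs / np.linalg.norm(reference_embs, axis=1, keepdims=True)
    sims = refs @ q                      # cosine similarity against all N images
    best = np.argsort(sims)[::-1][:k]    # indices of the k closest images
    return best, sims[best]
```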
TZubiri 4 months ago
Calling an LLM and a CV model by the same name to give the appearance of AGI is a pet peeve of mine.

And someone that's not OpenAI buying into this naming convention is just unpaid propaganda.
schappim 4 months ago
I would love to see the prompt / image data sent to GPT-4o!
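The write-up doesn't publish the exact prompt, but a GPT-4o tiebreaker call could plausibly look like this with the OpenAI Python SDK: send the visitor photo plus the top candidates and ask for a single pick. The prompt text, file names, and candidate count are assumptions, not the authors' actual code.

```python
import base64
from openai import OpenAI

client = OpenAI()

def as_image_part(path):
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

content = [{"type": "text",
            "text": "The first image is a visitor photo. The next three are "
                    "candidate illustrations. Reply with only the number (1-3) "
                    "of the candidate that matches the photo."}]
content += [as_image_part(p) for p in ["photo.jpg", "c1.jpg", "c2.jpg", "c3.jpg"]]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": content}],
)
print(resp.choices[0].message.content)
```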
babyent 4 months ago
This was a fun read. I'm not an AI expert by any means. I'm also ESL. Please bear with me.

The inaccuracy threshold seems fine for a museum, but in enterprise operations inaccuracy can mean lost revenue or, worse, lost trust and future business flow.

I'm struggling with some more advanced AI use cases in my collaborative work platform. I use AI (LLMs) for things like summarization, communication, and finding information using embeddings. However, sometimes it is completely wrong.

To test this I spent a few days (doing something unrelated) building up a recipes database and then trying to query it for things like "I want to make a quick and easy drink". I ran the data through classification and other steps to get as good data as I could. The results would still include fries or some other food result when I'm asking for drinks.

So I have to ask: what the heck am I doing wrong? Again, for things like sending messages and reminders, coming up with descriptions, and finding old messages that match some input - no problem.

But if I have data that I'm augmenting with additional information (trying to attach more information that may be missing but is possible to deduce from what's available) to try to enable richer workflows, I'm always getting bitten. I feel like if I can figure this out I can provide way more value.

Not sure if what I said makes sense.
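One common fix for the recipe example above, sketched under assumptions: attach structured metadata (a "category" field) at index time and hard-filter on it *before* the embedding search, so a "drink" query can never surface fries. The data shape, field names, and `embed` function are hypothetical.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_recipes(query, query_category, recipes, embed, k=5):
    """recipes: dicts like {"name": ..., "category": ..., "embedding": ...}."""
    # Hard metadata filter first; similarity only ranks *within* the category.
    candidates = [r for r in recipes if r["category"] == query_category]
    q = embed(query)
    candidates.sort(key=lambda r: cosine(q, r["embedding"]), reverse=True)
    return candidates[:k]

# e.g. search_recipes("quick and easy drink", "drink", recipes, embed)
```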
gazchop 4 months ago
I hear a lot of qualitative speak but nothing quantitative.