
Launch HN: Bucket Robotics (YC S24) – Defect detection for molded and cast parts

111 points by lasermatts 9 months ago
Hey Hacker News! We're Matt and Steph from Bucket Robotics (https://bucket.bot). Bucket transforms CAD models into custom defect detection models for manufacturing: https://youtu.be/RCyguguf3Is

Injection molded and cast parts are everywhere – 50% of what's visible on a modern car is injection molded – and these molds are custom created for each part and assembly line. Injection molding is a process where small plastic pellets are heated – primarily by friction from an auger – and pushed into a mold – usually two big milled-out chunks of aluminum or steel – that are pressed together by somewhere between 10 tons and thousands of tons of pressure. Once the plastic cools, the machine opens the mold and pushes the newly formed object out using rods called ejector pins. Look at a plastic object and you can usually find a couple of round marks from the ejector pins, a mark from the injection site, a ridge where the faces of the mold meet, and maybe some round stamp marks that tell you the day and shift it was made on. (A great explainer on the process: https://youtu.be/RMjtmsr3CqA?si=QjErT_rOU9-_TQ8d)

Defect detection is either traditional ML based – get a real-world sample, image it, label defects, repeat until there's a big enough set to build a model – or done manually. Humans have an 80% success rate at detection, and that gets worse throughout the day, because decision fatigue degrades performance near lunch and end of shift (https://en.wikipedia.org/wiki/Decision_fatigue). Creating an automated system usually takes somewhere between 2 days and 2 weeks to collect and label real-world samples, then build a model.

Injection molding is currently a 300 billion USD market, and as vehicle electrification increases, more of a car's components are injection molded, making that market even bigger. And because so much of that surface area is customer-facing, any blemish, scratch, or burn is considered defective. Speaking to folks in the space, you can see a defect rate as high as 15% for blemishes as small as 1 cm^2.

Our solution to this problem is to build the models from CAD designs instead of real-world data. An injection mold is usually machined aluminum or steel and can cost anywhere from $5k to >$100k, usually with a significant lead time. So when customers send their designs out to the mold makers – or to their CNC if they machine in-house – they can also send them to us in parallel and have a defect detection model ready to go long before their mold is even finished.

On the backend, we generate these detection models by creating a large number of variations of the 3D model – some to simulate innocuous things like ejector pin marks, and most to simulate various defects like flash. Once we have our 3D models generated, we fire them off to the cloud to render photorealistic scenes with varied camera parameters, lighting, and obscurants (shops are dusty). With labeled images in hand, it's a simple task to train a fairly off-the-shelf transformer-based vision model and deliver it to the customer.

Running the model doesn't require fancy hardware – our usual target device is an Orin Nano with a 12MP camera – and we run it purely on-device so that customer images never leave the worksite. We charge customers by the model: when they plan a line change to a new mold, ideally they'll contact us and we'll have their model ready before retooling is complete.

Injection molding is as error prone as it is cool to watch. For example, flash is a thin layer of extra plastic, usually hanging off the edge of the part or overhanging a hole in the part, which makes parts defective aesthetically or can even prevent them from joining up properly. It can happen for many reasons: too high an injection pressure, too low a clamping pressure, a grubby mold surface, mold wear, poor mold design, and that's just to name a few!

Steph and I have a history of working on manually performed tasks that we want to automate – we've been working together for the last five years in Pittsburgh on self-driving cars at Argo AI, Latitude AI, and Stack AV. Before that, I worked at Michelin's test track and at Uber ATG. We really, really love robots.

Our first pitch to Y Combinator was "build a better Intel RealSense," since it's a universally used (and loathed) vision system in robotics. We built our first few units and started building demos for how folks could use our camera – and that's when we found defect detection for injection molding and casting. Defect detection is well understood and highly automated for things like PCBs, where a surface defect can indicate a future critical failure (hey, that capacitor looks a little big?), but defect detection for higher-volume/lower-cost parts still takes too much cost and effort for most shops.

We're excited to launch Bucket with you all! We'd love to hear from the community – and if you know anyone working in industrial computer vision or quality control, please connect us! My email is matt@bucket.bot – we can't wait to see what you all think!
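For intuition, the "variations plus domain randomization" idea in the backend paragraph can be caricatured in a few lines of Python. This is a toy sketch, not Bucket's actual pipeline – the real system renders photorealistic scenes from CAD, whereas here the "part" is a bright square, "flash" is a thin lip of extra material on one edge, and randomization is just a lighting gain plus sensor noise:

```python
import numpy as np

def render_base_part(size=64):
    """Stand-in for a CAD render: a bright square part on a dark background."""
    img = np.zeros((size, size), dtype=np.float32)
    img[8:-8, 8:-8] = 1.0
    return img

def add_flash(img, rng):
    """Simulate flash: a thin lip of extra material hanging off one edge."""
    out = img.copy()
    edge = int(rng.integers(0, 4))
    t = int(rng.integers(1, 4))  # lip thickness in pixels
    if edge == 0:
        out[8 - t:8, 8:-8] = 0.6       # top edge
    elif edge == 1:
        out[-8:-8 + t, 8:-8] = 0.6     # bottom edge
    elif edge == 2:
        out[8:-8, 8 - t:8] = 0.6       # left edge
    else:
        out[8:-8, -8:-8 + t] = 0.6     # right edge
    return out

def synth_dataset(n, rng):
    """Yield (image, label) pairs; label 1 = defective (flash), 0 = good."""
    for _ in range(n):
        img = render_base_part()
        label = int(rng.random() < 0.5)
        if label:
            img = add_flash(img, rng)
        # crude domain randomization: lighting gain plus sensor noise
        img = img * rng.uniform(0.7, 1.3) + rng.normal(0.0, 0.05, img.shape)
        yield img.astype(np.float32), label

data = list(synth_dataset(200, np.random.default_rng(0)))
```

From here the labeled pairs would feed a standard supervised training loop; the point of the approach is that every label is free, because each defect was injected deliberately rather than found and hand-labeled on a shop floor.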

9 comments

acyou 9 months ago
Nice! I have so many questions... How stable is the injection molding process once it's fully proven out, up and running? Is it a bathtub curve shape, or do defects keep randomly popping up?

What do you use on your end to label the ejector pin locations, parting lines, etc.? Does this process use Hexagon software inputs to make that easier?

If you're not relying so much on a skilled operator, would you be using a CMM for dimensional inspection anyway, and then would this be better solved with a CMM? How can you get quality parts if you don't have a skilled operator to set up the machine correctly and correct the defects? Are you ever going to be able to replace a good machine operator? Or does this just help reduce the inspection toil and burden? Do they usually need 100% inspection, or just periodic inspection with binning?

Why do you want to target injection molded parts and not machined parts?

Don't most of these machines have the parts just fall into a bin, with no robot arm? Doesn't this seem like, instead of paying a good injection mold tech, you're now paying for an injection mold tech and a robotics tech, if you have to program the arm path for every part setup?

How many defects are "dimensional" and how many are "cosmetic"?

Can a defect detection model accept injection mold pressure curves as input? Isn't that a better data source for flash and underfilling?

Is this supposed to be retrofit, or go on new machines?
a1rb4Ck 9 months ago
Very cool. Good luck! I used to work on this. Your synthetic dataset pipeline is really neat. A foundation model of molding defects might be feasible. I hope you will also work on the whole inline quality control problem. From what I saw of the field, sometimes you only get the final quality days after painting, finishing, or cool-down of big parts. And the quality metric is notably undefined for visual defects; using the CAD render as a reference is a good solution. Because plastic is so cheap and the process so stable, I have seen days of production shredded for a tiny, perfectly repeated visual defect. Injection molding machines are heavily instrumented [0], and I tried mixing in-mold sensors + process parameters + photos + thermography of hot parts [1] (sorry, it's in French; might find better docs later).

[0] https://scholar.google.com/citations?view_op=view_citation&hl=en&user=QHwwvHMAAAAJ&citation_for_view=QHwwvHMAAAAJ:bFI3QPDXJZMC

[1] https://a1rb4ck.github.io/phd/#[128,%22XYZ%22,85.039,614.438,null]
jjk166 9 months ago
I'm an engineer at a company that injection molds parts for medical and industrial devices. This seems extremely promising.

Can your scene generator handle things like custom tooling? For example, if I were to place a part to be inspected on a clear acrylic jig, could the model be trained to look through the acrylic?

We're already using a vision system to measure certain features on the parts; can your models be applied to generic images, or do they require integration with the camera?

How does the customer communicate the types and probable locations of potential defects? Or do you perform some sort of mold simulation to predict them? Likewise, how does the customer communicate where defects are critical versus non-critical?

Finally, how does pricing work? Does it scale based on part size, does the customer select how many variations, or do you do some analysis ahead of time and generate a custom quote? Is it a one-time cost or an ongoing subscription? Could you ballpark a price range for generating a model for a part roughly 3.5 inches in diameter and 1.5 inches tall with moderate complexity?

Feel free to reach out to the email in my profile if you'd like to discuss in a little more depth.
JofArnold 9 months ago
Mechanical engineer turned software engineer here; I love this kind of stuff, and I frequently wonder how I might apply my software expertise to that domain again. Amongst other things I worked in automotive, and the components I worked on were forged and heat-treated high-strength steels. The defects in forged components are often very small (tens of microns), but I'd be curious whether this could work there. We used powerful microscopes – including electron microscopes – on the production lines, so maybe that would work?
chfritz 9 months ago
Nice use case! Can you elaborate a bit more on the robotics piece? What role does the robot play? I assume it's required to turn the part around for inspection. If so, how do you (automatically?) compute the grasping point? Also, feel free to find me on LinkedIn if you want to chat more about growing a robotics business and/or geometric reasoning for manufacturing.
doctorpangloss 9 months ago
This looks cool.

> Steph and I have a history of working...

I have so many questions, since you are experienced.

Do you think there should be import tariffs on Chinese-made EVs?

I know your gut is telling you not to answer this question, but that is, like, the biggest and most important story in auto manufacturing, no? It would be like saying: if cars were extremely cheap, so that everyone could have one, the manufacturing story for free cars must already be sufficient, and so there isn't much demand for innovation to make things cheaper. But in real life, the thing that makes cars cheap or expensive is a law, which could disappear with the stroke of a pen, so it's interesting to get your POV.

> On the backend we're generating...

OpenAI, Anthropic, Stability, etc. have already authored 3D-model-to-synthetic-data pipelines – why won't they do this one?
knicholes 9 months ago
I apologize for such a naive comment, as I don't have experience in this field, but I've seen OpenAI do some pretty impressive image recognition tasks (multimodal LLMs). Have you tried uploading some images of successful injection castings and some of unsuccessful ones (they don't even have to be from the same mold), telling it "these are examples of success" and "these are examples of failures, e.g. flash, blemish, scratch, etc.," and then feeding it a picture of the cast object?

It'd be interesting to hear how effective that is.
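For what it's worth, that few-shot experiment is cheap to set up. Below is a rough sketch of the request payload for an OpenAI-style vision chat endpoint. The message shape follows the documented base64 image-input format, but the model name and prompt wording are placeholders, and this only builds the request rather than sending it:

```python
import base64

def image_part(image_bytes):
    """Wrap raw JPEG bytes as a base64 data-URL image part (OpenAI-style)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

def few_shot_request(good_images, bad_images, query_image):
    """Build a chat request: labeled example images, then one part to judge."""
    content = [{"type": "text", "text": "These are examples of good parts:"}]
    content += [image_part(b) for b in good_images]
    content += [{"type": "text",
                 "text": "These are defective (flash, blemish, scratch):"}]
    content += [image_part(b) for b in bad_images]
    content += [{"type": "text",
                 "text": "Is this part good or defective? Answer in one word."},
                image_part(query_image)]
    return {"model": "gpt-4o",
            "messages": [{"role": "user", "content": content}]}

req = few_shot_request([b"\xff\xd8good"], [b"\xff\xd8bad"], b"\xff\xd8query")
```

In practice, the per-part latency and per-image token cost make this more of a feasibility experiment than a production line inspector, which is presumably why a small on-device model is the target instead.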
edg5000 9 months ago
How would this compare against producing a 3D mesh using traditional photogrammetry and comparing the CAD model and mesh for deviations? Or would this be unrealistic since the photogrammetrically produced mesh would lack the level of detail required?
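The comparison described here – scan the part, align it to the CAD frame, flag points that deviate beyond tolerance – reduces to a nearest-neighbor distance check. A toy 2D sketch under simplifying assumptions (real pipelines would first register the scan to the CAD frame, e.g. with ICP, and use point-to-surface distances against the actual mesh rather than point-to-point distances against samples):

```python
import numpy as np

def deviations(scan_pts, cad_pts):
    """Distance from each scanned point to its nearest CAD reference point.

    Brute-force nearest neighbor; a real pipeline would use a KD-tree and a
    point-to-surface metric, after aligning the scan to the CAD frame.
    """
    d = np.linalg.norm(scan_pts[:, None, :] - cad_pts[None, :, :], axis=-1)
    return d.min(axis=1)

# CAD reference: points sampled along a unit square's outline (a toy "part")
t = np.linspace(0.0, 1.0, 200)
cad = np.concatenate([
    np.stack([t, np.zeros_like(t)], axis=1),  # bottom edge
    np.stack([t, np.ones_like(t)], axis=1),   # top edge
    np.stack([np.zeros_like(t), t], axis=1),  # left edge
    np.stack([np.ones_like(t), t], axis=1),   # right edge
])

scan = cad + np.random.default_rng(1).normal(0.0, 0.001, cad.shape)  # good scan
scan_flash = scan.copy()
scan_flash[:10] += np.array([0.0, -0.05])  # a lip of extra material off one edge

flagged = deviations(scan_flash, cad) > 0.01  # tolerance threshold
```

The detail concern raised in the question is the real catch: photogrammetry tends to struggle on smooth, low-texture plastic surfaces, which is exactly where shallow cosmetic defects live, so an image-space classifier can be more practical than a geometric comparison for blemish-class defects.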
anomaly23 9 months ago
What is the difference between this and traditional FEA done on CAD models? Is this purely a cost benefit?