
Show HN: Scale 3D, API for 3D labeling of LIDAR, camera, and radar data

85 points by ayw over 7 years ago

6 comments

ayw over 7 years ago
Hey everyone! I'm Alex, CEO and co-founder of Scale. One of the biggest bottlenecks to development in perception and vision for robotics and self-driving companies has been the ability to label 3D data. The ability to label LIDAR, camera, and radar data together has massively accelerated our customers' timelines.

We've worked with a number of self-driving companies like GM Cruise, nuTonomy, Voyage, Embark, and more to build high-quality training datasets quickly, leveraging our API. Scale is the perfect platform for this work; we're focused on really high quality data produced by humans via API.
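As a rough illustration of the workflow described above only: the endpoint, field names, and auth scheme below are invented for this sketch and are not Scale's documented API. Submitting a LIDAR annotation task from a Node/Next.js backend might look something like this:

```typescript
// Hypothetical sketch: endpoint, fields, and auth are illustrative, not Scale's real API.
interface LidarAnnotationTask {
  project: string;
  attachments: string[];   // URLs to LIDAR point-cloud frames
  cameraImages: string[];  // synchronized camera frames for sensor fusion
  labels: string[];        // object classes to annotate (e.g. "car", "pedestrian")
  callbackUrl: string;     // where completed human annotations are POSTed back
}

async function createLidarTask(apiKey: string, task: LidarAnnotationTask) {
  // Submit the task; human annotators label it and results arrive asynchronously
  // at the callback URL once the work is finished.
  const res = await fetch("https://api.example.com/v1/task/lidarannotation", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // illustrative auth scheme
    },
    body: JSON.stringify(task),
  });
  if (!res.ok) throw new Error(`Task creation failed: ${res.status}`);
  return res.json();
}
```

The callback URL is the key design point in a sketch like this: the request returns immediately, and the completed annotations are delivered asynchronously after the human labelers finish.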
bringtheaction over 7 years ago
In the first demo you use WASD to move around. Personally I use a very different keyboard layout, but that's not what I was going to say. Actually, while we're on the topic, it should be noted that some countries use AZERTY [0], which means more people are affected by the particular choice of WASD than just those of us who have chosen non-standard layouts like Dvorak or Colemak, so maybe consider letting the user define which keys to use after all, even though I wasn't going to suggest that.

Anyway, the thing I was going to suggest was to allow click-and-drag with the mouse to look around, and touch-and-drag on mobile devices. Also, provide on-screen buttons so mobile users can move forwards, backwards, and sideways.

[0]: https://en.wikipedia.org/wiki/AZERTY
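For what it's worth, a minimal sketch of how user-remappable movement keys and drag-to-look (mouse or touch) could be wired into a browser-based viewer is below. This is generic, assumed code, not the demo's actual implementation, and `rotateCamera` is a hypothetical hook:

```typescript
// Generic sketch of remappable movement keys and drag-to-look for a browser 3D viewer.
type Action = "forward" | "back" | "left" | "right";

// Default to WASD physical key positions, but let the user override the map
// (e.g. for AZERTY, Dvorak, or Colemak preferences).
const keyBindings: Record<string, Action> = {
  KeyW: "forward", KeyS: "back", KeyA: "left", KeyD: "right",
};

// A render loop would read this set each frame to move the camera.
const pressed = new Set<Action>();
window.addEventListener("keydown", (e) => {
  const action = keyBindings[e.code];
  if (action) pressed.add(action);
});
window.addEventListener("keyup", (e) => {
  const action = keyBindings[e.code];
  if (action) pressed.delete(action);
});

// Click-and-drag (or touch-and-drag) to look around; pointer events cover both.
let dragging = false;
let last = { x: 0, y: 0 };
window.addEventListener("pointerdown", (e) => {
  dragging = true;
  last = { x: e.clientX, y: e.clientY };
});
window.addEventListener("pointermove", (e) => {
  if (!dragging) return;
  const dx = e.clientX - last.x;
  const dy = e.clientY - last.y;
  last = { x: e.clientX, y: e.clientY };
  rotateCamera(dx, dy);
});
window.addEventListener("pointerup", () => { dragging = false; });

function rotateCamera(dx: number, dy: number): void {
  // Apply yaw/pitch to the viewer camera here (viewer-specific).
}
```

Because pointer events unify mouse and touch, the same drag handler serves desktop and mobile; on-screen movement buttons for mobile could simply add and remove entries in the same `pressed` set.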
isawczuk over 7 years ago
Hi @Alex. I love the job you are doing! A few questions:

1. Does pushing an API for LIDAR mean that ScaleAPI is going to focus more on self-driving tech?

2. How do you see your tech in comparison to what comma.ai is doing?

3. What is the minimum resolution of LIDAR data needed to make meaningful annotations?
mhb_eng over 7 years ago
Very cool! Have you considered applying this technique to labeling Building Information Modeling (BIM) point cloud data? One of the challenges when dealing with as-built BIM capture is understanding exactly how point clouds map to real-world features.
zawerf over 7 years ago
How do humans generate these labels? I'm not from this field and I'm curious about the UI aspect of it. It's not like you can give each of your min-wage labelers a VR headset/controller to draw bounding boxes with.
xemoka over 7 years ago
Wait... I can't be the only one thinking "these are humans?"