Show HN: Fast, compiled deep-learning-based modules for inferencing on CPUs

6 points by eagledotRL about 3 years ago
Hi HN, I am Anubhav from RamanLabs. We have been developing dedicated deep-learning-based modules for purposes like face detection, object detection, and pose estimation.

We hope to make it easy for developers and hobbyists to integrate such functionality into their existing app/pipeline at the cost of a few milliseconds. All our modules run end to end in super-realtime even on consumer-grade CPUs [0]. For now we provide only a Python-based API.

We provide a demo for each of the modules to allow testing on your desired data distribution. We also have a blog [1] where we hope to add more technical details about the framework used to develop these modules.

The framework used to develop these modules is written entirely in the Nim language. We wrap existing op implementations from libraries like oneDNN and write our own code where we cannot find one or where the existing implementation is not good enough, mainly for preprocessing and postprocessing. Having full access to the framework code, which is written in a high-level language, allows us to port newer architectures and optimize them quickly.

We would love to hear your feedback on our attempt.

[0] Quad-core CPU with AVX2 instructions.
[1] https://ramanlabs.in/static/blog/index.html
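
The post does not show the API itself, so the snippet below is only a hypothetical sketch of how dropping one of these Python modules into an existing frame-processing loop might look. The package name ramanlabs, the FaceDetector class, and its detect() method are assumptions made for illustration; they are not the actual RamanLabs API.

    # Hypothetical integration sketch -- the package, class, and method names
    # are assumptions, not taken from the RamanLabs documentation.
    import cv2                             # any frame source works; OpenCV webcam used here
    from ramanlabs import FaceDetector     # assumed import path

    detector = FaceDetector()              # load the compiled CPU module once

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The post claims a few milliseconds per call on a quad-core AVX2 CPU.
        boxes = detector.detect(frame)     # e.g. a list of (x, y, w, h, score)
        for x, y, w, h, score in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) == 27:           # Esc to quit
            break
    cap.release()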

no comments