
Speed Is All You Need: On-Device Acceleration of Large Diffusion Models

56 points by Pelayu, about 2 years ago

3 comments

nl, about 2 years ago
Interestingly, these are OpenCL kernels, so in theory some of the optimizations might run out of the box on CPUs.

It would be instructive to compare their speedups on the iPhone to the Apple CoreML implementation: https://github.com/apple/ml-stable-diffusion
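
A minimal sketch of what "running on CPUs" could look like, assuming pyopencl and an OpenCL CPU runtime (e.g. PoCL) are installed; the kernel below is a placeholder, not one of the paper's actual kernels:

    # Sketch: select an OpenCL CPU device and run a trivial kernel on it.
    import numpy as np
    import pyopencl as cl

    # Find the first platform that exposes a CPU device.
    cpu_devices = []
    for platform in cl.get_platforms():
        try:
            cpu_devices = platform.get_devices(device_type=cl.device_type.CPU)
        except cl.RuntimeError:
            continue  # this platform has no CPU devices
        if cpu_devices:
            break

    ctx = cl.Context(devices=cpu_devices)
    queue = cl.CommandQueue(ctx)

    # Placeholder kernel; the same OpenCL C source would compile for GPU or CPU.
    program = cl.Program(ctx, """
    __kernel void scale(__global float *x, const float alpha) {
        int i = get_global_id(0);
        x[i] *= alpha;
    }
    """).build()

    host = np.arange(8, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)
    program.scale(queue, host.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, host, buf)
    print(host)  # kernel ran on the CPU device, no GPU required

Whether the paper's specific optimizations (fused kernels, specialized GroupNorm/GELU, etc.) actually pay off on CPU hardware is a separate question; this only shows that the kernels are not tied to a GPU device type.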
DennisAleynikov, about 2 years ago
This is incredible, I can't wait to run it. Is there a code sample somewhere to reproduce their Samsung S23 results?
sigmoid10, about 2 years ago
This is definitely a welcome development, but I'm getting so tired of all these papers trying to pay homage to the original Transformer paper in their titles. It isn't funny anymore, and it neither gives due credit nor indicates quality. On top of that, the original title was a pretty poor choice in hindsight, highlighting how the original authors didn't foresee the gigantic impact of their paper.