Is it ok to use GPUs like the A4000 or RTX 3060 as inference servers?

1 point by kouohhashi 9 months ago
When it comes to inference on cloud services, Tesla T4 GPUs are often used, but they are not cheap. It seems that building a server room with A4000 workstations or RTX 3060 nodes might be more cost-effective, but this could potentially violate Nvidia's terms and conditions.

On the other hand, there are cloud services that advertise that using the A4000 for inference is acceptable. Does this mean that while support from Nvidia might not be available, it is implicitly tolerated by Nvidia?
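For context, a rough break-even sketch shows why on-prem cards look tempting. All prices below are illustrative assumptions (roughly $0.35/hr for an on-demand cloud T4 and about $1,000 for an A4000 card), not quotes; real figures vary by region and vendor:

# Hedged back-of-the-envelope: when does a one-time A4000 purchase
# beat renting a cloud T4? Every number here is an assumption for
# illustration -- check current vendor pricing before deciding.

CLOUD_T4_PER_HOUR = 0.35   # assumed on-demand $/hr for a cloud T4
A4000_PURCHASE = 1000.0    # assumed one-time card cost in $
HOURS_PER_MONTH = 730      # average hours in a month

break_even_hours = A4000_PURCHASE / CLOUD_T4_PER_HOUR
print(f"Break-even after ~{break_even_hours:.0f} GPU-hours "
      f"(~{break_even_hours / HOURS_PER_MONTH:.1f} months of 24/7 use)")

This ignores electricity, cooling, and the cost of the host machine, all of which push the break-even point later; the licensing question is separate from the economics either way.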

1 comment

migf 9 months ago
This is giving me such Beowulf Cluster deja vu