TechEcho
Portable LLM Across GPUs/CPUs/OSes: WASM for Cloud-Native and Edge AI [video]
1 point
by
3Sophons
9 months ago
1 comment
3Sophons
9 months ago
Live demos start at 10:40:
• Setting up the LlamaEdge API server.
• Packaging it as a Wasm container image.
• Running the image with Docker.
• Deploying it in a Kubernetes cluster.
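The Kubernetes step above can be sketched with a manifest like the following. This is a minimal, hypothetical example, not taken from the video: it assumes the cluster's nodes run a containerd shim for WasmEdge exposed through a RuntimeClass named "wasmedge", and the image name is a placeholder, not a real published image.

```yaml
# Assumption: a WasmEdge containerd shim is installed on the nodes,
# registered under the handler name "wasmedge".
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llamaedge-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llamaedge-api
  template:
    metadata:
      labels:
        app: llamaedge-api
    spec:
      # Schedule the pod onto the Wasm runtime instead of a regular
      # Linux container runtime.
      runtimeClassName: wasmedge
      containers:
        - name: api-server
          # Placeholder image name for the Wasm-packaged LlamaEdge server.
          image: example.org/llamaedge-api:wasm
          ports:
            - containerPort: 8080
```

The same image could be run locally before deploying, using a Docker installation with Wasm support enabled, by selecting the WasmEdge runtime and the wasi/wasm platform at `docker run` time.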