The egui framework they mention is pretty neat:<p><a href="https://emilk.github.io/egui/" rel="nofollow">https://emilk.github.io/egui/</a><p>It's an entirely custom toolkit, so don't expect it to have a native look and feel, but it's a GPU-first design with multiple back-ends. It can be used in native OpenGL apps too. It's an immediate-mode UI, so it's very easy to build and update even complex windows. Great choice if you want to prototype a game.
Why don't apps like this leverage GPU hardware acceleration?<p>On my old tablet, Netflix runs a-ok, the media player has hardware acceleration and runs a-ok, but Amazon Prime stutters.<p>I haven't checked it in years, so maybe it is better now, but I'm skeptical, because if the original creators of the Android app didn't think hardware-accelerated video might be a nice idea, why would it be different now? This is a structural issue.
> Prime Video<p>> xyz [4K/UHD]<p>4K is not available when using a computer<p>> HD is not available because you're not using Windows<p>when using Windows<p>> HD is not available because you don't have HDCP<p>Graphics driver begs to differ.<p>Prime SD is usually somewhere between 240p and 320p, HD looks like a decently encoded 720p file. Never seen it but I'm guessing 4K might actually approach the quality of a 2007 Blu-Ray.
Prime Video has unbelievably bad performance on LG webOS, and it's not the platform, because Hulu, Netflix, Disney, and YouTube are all fine. If this is new and makes their app work acceptably, then great!<p>Just checked, and Prime is using 90MB on my TV while YouTube uses 214KB, so maybe I already have the wasm monstrosity.
And yet they can't even implement Picture-in-Picture mode in their Android app, a basic feature that has been stable for years in virtually every app that plays video.<p>Not to mention that their show recommendations are the worst ever; I would honestly be ashamed to be part of their ML team. (For reference, I'm an American/North African woman staying in France, and all of my recommendations are for Indian action movies, literally all of them. I've never seen a single Bollywood film.)<p>Anyway, this is to say: technical innovation won't undo the damage done by shitty product managers.
Personally, <i>to me</i>, whenever I read these stories about how Amazon is doing this novel use of WebAssembly, or how Uber is doing ludicrous engineering effort to keep their React-based app under 300MB for the App Store, I can't help but think:<p>"Man, that's an awful lot of work to avoid writing a native app."
Really interesting to see the adoption. Disney+ shared the following: <a href="https://medium.com/disney-streaming/introducing-the-disney-application-development-kit-adk-ad85ca139073" rel="nofollow">https://medium.com/disney-streaming/introducing-the-disney-a...</a>
On systems such as iOS, where you don't have a JIT, performance is going to be really bad, like dancing in a pair of iron shoes. Honestly, Amazon can afford to go fully native, at least for the scene and animation stuff.
What was the reason for WebAssembly versus just running Rust directly on the target hardware? Is the idea that the constrained Wasm environment requires less QA than if the hardware were targeted specifically by the Rust compiler?
It's nice to see a big company like Amazon using Wasm in something that actually sees the light of production (and in something that's used by so many people). I always see tech giants supporting or contributing to newer technologies like Wasm without really using them in important applications.
Language success is in large part driven by what system adopts that language.<p>Objective-C's success was driven by iOS.<p>C's success was driven by Unix.<p>C++'s early success was driven in large part by 90s GUI frameworks (MFC, OWL, Qt, etc.).<p>I think Wasm is shaping up to be huge in that you can safely run high-performance code across multiple operating systems/CPUs. Of all the languages, I think Rust has the best Wasm tooling, and the 2020s may end up being the Wasm/Rust decade.
I'd be happy with surround (5.1) on Windows; alas, it seems you can only get that via an Amazon device. HD with 5.1 would suit me over any 4K. I do feel many streaming services use 4K etc. as marketing more than anything else while failing on the audio side in regards to surround, which IMHO adds far more to the viewing experience than a bump in resolution - though others' mileage may vary.
First time I've seen a practical Wasm system design and transition written about. That being said, I would really love a deeper dive. The graphics were also insightful -- cheers, alexandru
Sounds like they are doing it wrong.<p><i>Our Wasm investigations started in August 2020, when we built some prototypes to compare the performance of Wasm VMs and JavaScript VMs in simulations involving the type of work our low-level JavaScript components were doing. In those experiments, code written in Rust and compiled to Wasm was 10 to 25 times as fast as JavaScript.</i><p>For video processing, especially high-fidelity, high-frequency, high-resolution video, I can see WASM crushing JavaScript performance by orders of magnitude. But that isn't this. They are just launching an app.<p>I have verified in my own personal application that I can achieve superior performance and response times in a GUI compared to nearly identical interfaces provided by the OS on the desktop.<p>There are some caveats though.<p>First, rendering in the browser is offloaded to the GPU, so performance improvements attributed to browser interfaces are largely a reflection of proper hardware configuration on new hardware. The better the hardware, the better a browser interface can perform compared to a desktop equivalent, and I suspect the inverse is also true.<p>Second, performance improvements in the browser apply only up to a threshold. In my performance testing on my best hardware, that threshold is somewhere between 30,000 and 50,000 nodes rendered from a single instance. I am not a hardware guy, but I suspect this could be due to a combination of JavaScript being single-threaded and memory allocation designed for speed in a garbage-collected VM as opposed to allocated for efficiency.<p>Third, the developers actually have to know what they are doing. This is the most important factor for performance, and all the hardware improvements in the world won't compensate. There are two APIs for rendering any interface in the browser: canvas and the DOM. Each has different limitations.
The primary interface is the DOM, which is more memory-intensive but demands far less from the CPU/GPU, so the DOM can scale to a higher quantity of nodes without breaking a sweat, though without cool stuff like animation.<p>There are only a few ways to modify the DOM; most performance variation comes from reading it. In most cases, but not all, the fastest access comes from the old static methods like <i>getElementById</i>, <i>getElementsByTagName</i>, and <i>getElementsByClassName</i>. Other access approaches are faster only when there is no static-method equivalent, such as querying elements by attribute.<p>The most common and preferred means of DOM access is querySelectors, which are incredibly slow. The performance difference can be as large as 250,000x in Firefox. Modern frameworks tend to make this even slower by adding layers of abstraction and executing querySelectors with unnecessary repetition.
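If you want to sanity-check that gap yourself, here's a rough micro-benchmark sketch along those lines. The element id "app" and the run count are arbitrary assumptions, and absolute numbers will vary wildly across browsers and pages; paste it into a browser console on a page that has a matching element:

```javascript
// Rough sketch: time repeated lookups of the same element through two
// DOM access APIs and return elapsed milliseconds for each.
function timeLookups(runs = 100000) {
  // Hypothetical target: any page with <div id="app"> will do.
  const cases = {
    getElementById: () => document.getElementById("app"),
    querySelector: () => document.querySelector("#app"),
  };
  const results = {};
  for (const [name, fn] of Object.entries(cases)) {
    const start = performance.now();
    for (let i = 0; i < runs; i++) fn(); // hot loop over the same lookup
    results[name] = performance.now() - start; // elapsed ms
  }
  return results;
}
```

In my experience getElementById tends to win by a wide margin, and real apps amplify the gap because frameworks repeat selector queries on every render pass.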