I remember when pose detection was announced; the demo showed an app that corrected your workout movements.
I have yet to see an app that actually does that. I'd love to have the equivalent of a personal trainer showing me where I need to adjust my pose in, say, pushups or other simple exercises.

Thus I'm equally sceptical of seeing these APIs used. It seems developers are mostly porting web apps to all platforms, ignoring neat but platform-specific APIs like this.

Please prove me wrong and link some awesome apps that use pose detection.
To be clear, does this mean access on Apple devices, as opposed to something like an Apache 2-licensed GitHub repository?

https://twitter.com/yeemachine/status/1656391928223768576?s=20

https://mediapipe-studio.webapps.google.com/demo/face_landmarker

https://github.com/google/mediapipe
Where are the actual bindings? Linking to a page of long videos that are mostly not yet available (it says "Available on June 6" (or 7, 8, 9)), under an editorialized title that doesn't even appear on the page, is below HN standards.
Just in case you were wondering, "animals" seems to mean just cats and dogs: https://developer.apple.com/documentation/vision/vnanimalidentifier
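For anyone curious what that looks like in code, here's a minimal Swift sketch (mine, not from the linked docs; the function name is just for illustration) using Vision's VNRecognizeAnimalsRequest:

    import Vision

    // Run the built-in animal recognizer on a single image and print
    // what it found. As of now, VNAnimalIdentifier defines only .cat and .dog.
    func detectAnimals(in image: CGImage) throws {
        let request = VNRecognizeAnimalsRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        for observation in request.results ?? [] {
            for label in observation.labels {
                print("\(label.identifier) (confidence: \(label.confidence))")
            }
        }
    }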
We had tried their Vision framework for pose; the accuracy was not great compared to other open-source models. Hope they solve those issues with the new release.

@lgrebe: Check out XTRAVISION and let me know if that is what you were looking for.
Demo: https://demo.xtravision.ai/
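For reference, the Vision pose API in question looks roughly like this; a minimal sketch (the function name and confidence threshold are my own) that pulls body keypoints out of a single frame:

    import Vision

    // Detect a human body pose in one frame and print the joints
    // Vision recognized with reasonable confidence. Coordinates are
    // normalized to the image (0...1, lower-left origin).
    func detectPose(in image: CGImage) throws {
        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        guard let observation = request.results?.first else { return }
        let points = try observation.recognizedPoints(.all)
        for (joint, point) in points where point.confidence > 0.3 {
            print("\(joint): x=\(point.x), y=\(point.y)")
        }
    }

A fitness app would then compute joint angles (elbow, hip, knee) from these points over time, which is exactly where the accuracy problems bite.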
Does this mean Apple is making it easier to run models on Macs? I have a fairly powerful Mac Studio, but I've found it very hard to run any model on it.
(Feel free to correct me if I am wrong), but my main gripe with mobile ML frameworks (Android's too) is that they require the app to embed the ML model (as opposed to the OS storing the model like a shared library).

People with limited storage on low-end devices don't have enough space for such apps.
I would be interested to know of a consistent, on-board embeddings model. Reducing latency and dependence on API calls for simple vector database search would go a long way.
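The closest built-in option I'm aware of on Apple platforms is the NaturalLanguage framework's sentence embedding; a minimal sketch (the query and documents are invented) ranking strings by cosine distance without any API call:

    import NaturalLanguage

    // Built-in on-device sentence embedding (macOS 11+ / iOS 14+).
    if let embedding = NLEmbedding.sentenceEmbedding(for: .english) {
        let query = "how do I cancel my subscription"
        let docs = ["billing and refunds", "shipping times", "account cancellation"]

        // Rank documents by cosine distance to the query (smaller is closer).
        let ranked = docs.sorted {
            embedding.distance(between: query, and: $0, distanceType: .cosine)
                < embedding.distance(between: query, and: $1, distanceType: .cosine)
        }
        print(ranked.first ?? "no match")

        // Raw vectors for a vector database are available too:
        // embedding.vector(for: query) returns [Double]?.
    }

Whether its quality is consistent enough for a given use case is another question; I haven't benchmarked it against hosted embedding APIs.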
TL;DR: This is a good step for the entire market, but productization is the harder problem.

I was formerly involved with Kemtai, which built a fantastic physical therapy/fitness experience (in my biased view) using motion tracking.

If anyone's interested: it runs well and fast over WebGL on a pretty impressive share of regular phones and laptops, across all platforms (not just Apple).

My main learning is that the hard part is the productization on top of motion tracking: what constitutes an exercise? What counts as a "good" performance? How do you build the authoring workflow for the many hundreds to low thousands of exercises needed to serve a typical user base?

In any case, this is awesome news. There are literally billions of people whose condition will improve via motion-tracking-based health and fitness. May it grow there, and quickly!