I played around with a machine learning demo and used a banana, an apple and an orange for learning via webcam, and used speech synthesis to make it speak out loud. Once the accuracy was good I pointed the cam at my wife and it said: 100% certainty, a banana.
For quite some time I've been wanting to build a device with a camera that could recognize my cat on the counter and trigger a servo that would release a jet of compressed air. It looks like I could actually use this for that.
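The glue logic for that could be as simple as the sketch below. Everything here is a made-up stand-in: `predict` would really run a camera frame through an exported classifier (Lobe can export TensorFlow models), and `fire_air_jet` would really pulse a GPIO pin.

```python
# Hypothetical trigger loop for a cat-on-the-counter deterrent.
# Both predict() and fire_air_jet() are stubs for illustration only.

CONFIDENCE_THRESHOLD = 0.9

def predict(frame):
    # Stub: a real version would run the frame through the exported
    # model and return (label, confidence).
    return ("cat_on_counter", 0.97)

def fire_air_jet():
    # Stub: a real version would drive the servo releasing the air jet.
    return "psst!"

def check_frame(frame):
    label, confidence = predict(frame)
    if label == "cat_on_counter" and confidence >= CONFIDENCE_THRESHOLD:
        return fire_air_jet()
    return None
```

The confidence threshold is the important knob: too low and you soak an innocent cat-shaped bag, too high and the cat wins.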
Nice effort by Microsoft with a cool demo video. It looks like they haven't open sourced the app though:
<a href="https://www.reddit.com/r/Lobe/comments/jj8nb2/source_code_for_the_desktop_app/" rel="nofollow">https://www.reddit.com/r/Lobe/comments/jj8nb2/source_code_fo...</a><p>However, they have open-sourced some bootstrap apps here: <a href="https://github.com/lobe" rel="nofollow">https://github.com/lobe</a>
I really like how the website is done. Visually and content-wise. It transports the message pretty well into my brain.
Concise, not overloaded, good font sizes and looks good on mobile and desktop.
Seems to be marketed as 'machine learning', but on closer look it is only for machine learning on <i>images</i>. Does anyone know of something similar for analyzing other kinds of data? I'm particularly interested in analyzing records (like spreadsheet data).<p>Love the info site design.
This seems pretty cool -- but one issue for me is that (similar to the chasm in low-code app building once the magic no longer suits you) if I already have the skills to create a mobile app that integrates TensorFlow, I probably also have the skills to train my own models. It would be cool if feature extraction (image pre-processing and the first network layer(s)) could run on the front end, with the rest of the network/search on the back end, similar to how distributed speech recognition works. Then I could use a canned lib on the device that integrates with the camera, and get my results via a websocket. (Of course, I could still run everything on the client as well.)
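The split described above can be sketched in miniature. The "feature extractor" and nearest-centroid "back end" below are toy stand-ins, not how Lobe or TensorFlow actually partition a network; the point is only that the device ships a short feature vector instead of the raw image:

```python
# Toy sketch of client-side feature extraction plus server-side
# classification. All data and the pooling/centroid scheme are invented.

def extract_features(pixels, block=4):
    """Client side: mean-pool the raw pixels into a short vector."""
    return [sum(pixels[i:i + block]) / block
            for i in range(0, len(pixels), block)]

def classify(features, centroids):
    """Server side: nearest-centroid over the transmitted features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

centroids = {"banana": [0.9, 0.9], "orange": [0.1, 0.1]}
features = extract_features([0.8, 1.0, 0.9, 0.9, 0.9, 0.8, 1.0, 0.9])
print(classify(features, centroids))  # banana
```

In the real version `extract_features` would be the first convolutional layers running on-device, and only its output would travel over the websocket.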
What's that old joke? Something like in the 1980s a Media Lab teacher gives the class a computer vision assignment where they're supposed to be able to tell whether or not an image contains a bird, and 40 years later they're still working on it? Lobe.ai reminds me of trying to identify plants with Google Goggles 10(ish?) years ago. It didn't work very well then, and then Google killed Goggles. Side note: none of the "click on the leaf feature" web-based plant identifiers gave a satisfactory answer either.
I remember a previous version from about 2 years ago... I think Lobe.ai was web-based, at the time? And you could drag and drop various blocks around to do image recognition and analysis? I probably have some of the details wrong but the demos were very impressive.<p>While I never got approved for that beta (probably rightly so, I'm just some random person with no actual connection to ML or AI), I was excited to see what their work led to. Congrats on releasing this latest iteration and acquisition!
Why does this exist? I don't mean what you'd use it for, since the many uses are obvious, but why has a company (Microsoft) made it and released it for free?<p>Reading the license, I assume some future version may start requiring money: a new version will install and then say "please pay us to continue", or perhaps just "this product is no longer available". Note these aren't things I think will happen, just my theoretical guesses at answering the question of why Microsoft, a for-profit company, has made this closed-source, free tool that I think could be pretty useful for a lot of people.
I must have missed it, but Lobe is owned by Microsoft. The product looks clean and well suited for CV 101 applications. Looks like a no-code meets AI solution. Anyone using it beyond research / personal project implementations?
One thing I thought of when I saw the demo video, that is probably on the team's radar:<p>There would be a lot of cool ways to improve the model by giving feedback, either showing training images where the model is uncertain, or some more advanced explanations for classifications flagged as incorrect, in order to guide the user to gather the training data that can improve it.<p>And possibly providing a summary of where it knows it works well.<p>There are a lot of benefits there, both for improving models people are building but also to help users understand why their model is performing as it does.
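The "show me where the model is uncertain" idea above is usually called uncertainty sampling: rank predictions by the entropy of their class probabilities so the most ambiguous images get surfaced for labeling first. A minimal sketch (the probability vectors are made up for illustration):

```python
import math

# Uncertainty sampling sketch: surface the images whose predicted
# class distribution is closest to uniform (highest entropy).

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_uncertain(predictions, k=1):
    """predictions: {image_name: [class probabilities]}"""
    ranked = sorted(predictions,
                    key=lambda name: entropy(predictions[name]),
                    reverse=True)
    return ranked[:k]

preds = {
    "img_a.jpg": [0.98, 0.01, 0.01],  # confident
    "img_b.jpg": [0.34, 0.33, 0.33],  # nearly uniform, ambiguous
}
print(most_uncertain(preds))  # ['img_b.jpg']
```

Asking the user to label exactly these images is typically a much faster path to an accurate model than labeling at random.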
Lobe was a startup involving folks like Mike Matas that was acquired by Microsoft in 2018: <a href="https://blogs.microsoft.com/blog/2018/09/13/microsoft-acquires-lobe-to-help-bring-ai-development-capability-to-everyone/" rel="nofollow">https://blogs.microsoft.com/blog/2018/09/13/microsoft-acquir...</a>
The app is beautifully done. I'm really impressed by how well it works given the knobs available.<p>However, I tried to train it to recognize some images of characters from an anime (so a little different from facial recognition), and I managed to break the model: 64% error despite a significant number of examples per class. One downside is that Lobe doesn't expose how overconfident the model may be. I would love the ability to take the existing model and test it on a new image that I import into the app.<p>EDIT: I would love to see the following in a future version:<p>1. The percentages associated with each class, per image. I can see that an image was misclassified, but did the top 5 predicted classes at least include my desired class?<p>2. Testing the model on unlabeled inputs directly in the app, to see how well it might generalize. I would like to see a "Test" tab on the left once training is complete.<p>3. Other metrics of model goodness, like F1 score, and training details like CV partitions, surfaced in the app somehow.<p>Again, this is a really cool idea :)
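For reference, the F1 score asked for in point 3 is just the harmonic mean of precision and recall; this is the standard textbook formula, not anything Lobe currently exposes:

```python
# F1 score from raw confusion counts: true positives, false positives,
# false negatives. F1 = 2 * precision * recall / (precision + recall).

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 true positives, 2 false positives, 2 false negatives:
print(f1_score(8, 2, 2))  # 0.8
```

Because it ignores true negatives, F1 is a more honest summary than accuracy when the classes are imbalanced, which is exactly the overconfidence situation described above.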
Works nicely <a href="https://photos.app.goo.gl/GjXgk7fgQ8c54pB19" rel="nofollow">https://photos.app.goo.gl/GjXgk7fgQ8c54pB19</a>
This looks really similar to Google AutoML! I wonder if there are any advantages to switching.<p>More info on AutoML: <a href="https://cloud.google.com/automl" rel="nofollow">https://cloud.google.com/automl</a>
There is a similar app from Google: <a href="https://teachablemachine.withgoogle.com/" rel="nofollow">https://teachablemachine.withgoogle.com/</a><p>Both apps are great!
One of the things it shows on the main page says "train an app to count reps" while a lady is doing physical exercises. This is next to ridiculous. I don't need an app or any assistance counting my reps; I can do that myself. That's easy. What I really dream of is an app that points out mistakes in my technique/posture for each particular exercise. I wouldn't even mind putting on a funny costume or some motion sensors to make its job easier.
This is really neat, and to me it reveals the true power of machine learning: you can program a computer not by telling it what to do and how to do it, but by showing it what you want, and it learns to imitate and replicate the behavior even on new inputs, much like how you teach people tasks.
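That "program by example" idea fits in a few lines. A 1-nearest-neighbour classifier is "programmed" purely by labeled examples, yet still handles inputs it has never seen (the fruit feature vectors here are invented for illustration):

```python
# Minimal "program by example": classify a new input by copying the
# label of the most similar training example.

def nearest_label(examples, query):
    """examples: list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

examples = [([1.0, 0.2], "banana"), ([0.3, 0.9], "orange")]
print(nearest_label(examples, [0.9, 0.3]))  # a new, unseen input: banana
```

No rule about what makes a banana a banana ever gets written down; the behavior lives entirely in the examples, which is the shift in programming model the comment is pointing at.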
I'm on the Label page of the app and it's asking for five images, but I don't have any... could you please give a few example sets of images, e.g. drinking/not drinking, holding up # of fingers, etc, so I don't have to create the images myself?
This raises the question: why not ship it as an app itself, targeted at normal users, and let them custom-fit it to their needs? (Unless that's already the case and I'm wrong in thinking it's targeted at engineers building their own apps with it.)
Are there any recommendations for other robust plant/insect/microbe identification ML solutions? I usually post in the respective subreddits for identification; I've thought about an ML solution but never acted on it.
I realise this is a Microsoft one, but roughly how many of these are there now? And are they all just productised frontends for the Python libraries like I think they are?
I will be very curious to see how this makes it into production applications. It demos extremely well, but that doesn’t necessarily translate into something that’s production ready.
I didn't see system requirements and limitations. Is a GPU required? Normally hardware matters for ML training, so it's odd this doesn't talk about that at all.
Oh wow.<p>It's awesome for noobs like me to train things.<p>Thanks.<p>You should release this on Android and market it hard. Remember, machine learning will flourish when idiots like me can train it to do mundane tasks.
I am really looking for an AI service that can detect signatures and quoted threads in email messages and extract the "real new message" part. Does anyone know of such a tool?
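A crude stdlib sketch of that extraction is below: drop quoted lines and stop at a signature delimiter or an "On ... wrote:" marker. Real tools (e.g. Mailgun's open-source `talon` library) use trained models and far more robust heuristics; the regex and sample message here are only illustrative:

```python
import re

# Naive "real new message" extraction: skip quoted lines, stop at a
# signature delimiter ("--") or a reply header like "On ... wrote:".

def extract_reply(body):
    lines = []
    for line in body.splitlines():
        if line.startswith(">"):              # quoted text
            continue
        if line.strip() == "--":              # signature delimiter
            break
        if re.match(r"On .+ wrote:$", line.strip()):
            break                             # start of quoted thread
        lines.append(line)
    return "\n".join(lines).strip()

msg = """Thanks, that works!

On Mon, Nov 2, 2020 John wrote:
> Did you try restarting?
--
John"""
print(extract_reply(msg))  # Thanks, that works!
```

Mail clients vary wildly in how they mark quotes and signatures, which is why the production-grade approaches end up being ML problems rather than regex ones.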
Hmm, it seems the title of this post was changed and I can no longer edit it. Can someone change it to 'for training' or 'to train'? In its current form it's incorrect and sounds like some sort of locomotive AI.