> ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview.

That's... not great? For comparison, ARKit on iOS is going to support 400 million devices at launch (very rough numbers: ARKit runs on any new iPhone Apple's released over the past two years - iPhone 6S/SE/7 - and they sell over 200 million a year). Hardware fragmentation is a tough problem to solve.
Occasionally I'll write an app for my kids or wife. Every time, I'm thoroughly impressed by the Apple development ecosystem and thoroughly disgusted by Google's for Android.

This is no different. The Android development process is painful (the most verbose, cruft- and boilerplate-filled Java), cumbersome to organize and build (Gradle is terrible, and buggy), and clunky to debug (the integration with Studio is just awkward). About the only thing Google does better is testing releases through the developer console.

It's nice to see them finally providing something similar to ARKit. I just wish they'd work on all the other things that make Android development a horrible experience.
I commented here on Tango 3.5 years ago: "remains to be seen if Google can persuade cellphone manufacturers to include 2 special cameras + 2 extra processors in their future devices." Looks like that was the case.

It appears the ARCore API is well designed and one-to-one feature-equivalent to ARKit, i.e. VIO + plane estimation + ambient light estimation. The APIs even share a lot of names, e.g. Anchor, HitTest, PointCloud, LightEstimate.

Now that stable positional tracking is an OS-level feature on mobile, whole sets of AR techniques are unlocked. At Abound Labs (http://aboundlabs.com), we've been solving dense 3D reconstruction. Other open problems that can be tackled now include: large-scale SLAM, collaborative mapping, semantic scene understanding, and dynamic reconstruction.

With Qualcomm's new active depth sensing module, and Apple's PrimeSense waiting in the wings (7 years old, and still the best depth camera), the mobile AR field should become very exciting, very fast.
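To make the overlap concrete, here's a minimal Java sketch of the tap-to-anchor flow, written against the class names in the ARCore preview docs (Session, Frame, HitResult, Anchor, LightEstimate); treat the exact signatures as approximate:

```java
import android.view.MotionEvent;
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.PlaneHitResult;
import com.google.ar.core.Session;

// Sketch only: names follow the ARCore preview docs; signatures may differ.
class ArCoreSketch {
    // The preview's addAnchor can throw if tracking is lost, hence the broad throws.
    Anchor anchorFromTap(Session session, MotionEvent tap) throws Exception {
        Frame frame = session.update();  // VIO: latest camera image + 6DoF device pose

        // Ambient light estimation, for matching virtual lighting to the room.
        float ambient = frame.getLightEstimate().getPixelIntensity();

        for (HitResult hit : frame.hitTest(tap)) {  // ray-cast the tap into the scene
            if (hit instanceof PlaneHitResult
                    && ((PlaneHitResult) hit).isHitInPolygon()) {
                // Plane estimation + Anchor: pin virtual content to the surface.
                return session.addAnchor(hit.getHitPose());
            }
        }
        return null;  // no tracked plane under the tap
    }
}
```

The same handful of concepts maps almost one-for-one onto ARKit's ARSession/ARFrame/ARAnchor/ARLightEstimate.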
It seems very odd that this comes out and seemingly replaces Tango. Google spent a lot of time going over new Tango features in the latest Google I/O keynote, and Google Lens, which was featured quite a bit, seems to rely on Tango and its depth-sensing hardware for the "VPS" stuff.

Also, when Clay Bavor was talking about Tango-supported devices, he remarked that the devices were getting smaller and smaller, then implied it was coming to smaller, more traditional devices. I took this to mean they were close to getting the sensors ready for wide deployment, but I suppose it could also have meant they were ditching the sensors because they felt the software was good enough.

I'm kind of disappointed. I'd hoped he was saying that Tango sensors would show up on the Pixel 2 (which was a long shot, since the leaked photos don't really show the many sensors you see on current Tango devices). Instead we have what feels like a rushed-out me-too to match ARKit.
Looks like a lot of people on this thread think Google's goal is to beat Apple on features, but in my opinion that's not the case.

Google really has nothing to lose by following iOS's lead. It's good that they "gave up" on Tango and decided to follow ARKit, because it means Google is not trying to beat iOS with Android, but trying to commoditize iOS.

You really can't beat Apple at its own game; it's best to let go of that foolish goal and focus on nullifying whatever leverage Apple has with its few years' lead.

Sure, ARCore won't be installed on a lot of devices now, but in a couple of years it probably will be. (This is not the same as the Android ecosystem currently being fragmented, because AR provides an entirely new type of UX and will be significant enough for people to buy a new phone.) As long as Android gets there, Google will have achieved its goal: commoditizing AR.

In the end, Apple will have made tons of money with their iDevices, Google will NOT have, but they will have gained a large enough AR user base to use as leverage. Everybody wins.
It's funny how much marketing speak these big companies feel obliged to cram in. "At Android scale" -> "to catch up with Apple's ARKit".

It's actually impressive that Google is able to change direction and get this software-only AR out the door so quickly to compete with Apple, but they still don't want to admit that's what they're doing.
Also it looks like Google is retiring the "Tango" brand [1].

[1] https://techcrunch.com/2017/08/29/google-retires-the-tango-brand-as-its-smartphone-ar-ambitions-move-wider/
This is something I have been personally pushing the Google AR team on for at least a year, well before ARKit came out. I'm glad to see that ARKit made them actually move on this.

Google had been dead set on pushing Tango hardware to OEMs in the hope of lowering the hardware's BOM. Everyone who has been in AR long enough knew that wasn't going to happen, and that monocular SLAM in software was the way forward on mobile.

The key thing now for AR devs is that they will have fairly comparable monoSLAM capabilities available on both Android and iOS for their apps.

HOWEVER, that just means the tracking portion of the equation is solved for developers. A few years ago it was possible to make a cross-platform monoSLAM app if you used a handful of tools like Kudan or Metaio. Obviously ARKit and ARCore are going to be more robust with better longevity, but the failure of uptake of AR apps was not because of poor tracking; it was because of an inherent lack of stickiness in mobile AR use cases. That is, they are good for short, infrequent interactions, but rarely will you need the SLAM capabilities of an AR app every day or even multiple times a week.

This is why I am so invested in WebAR: you can deploy an AR capability outside of a native app, and with infrequent use it can still have longevity and a wider variety of users.

Yes, for those apps that people use all the time this will be very valuable, but if you look at the daily-driver apps like FB, IG, Snap, etc., they are already building their AR ecosystems on their own SLAM. All this does is lower overhead for them. For the average developer it doesn't solve the biggest problems in AR.

Kudos to Google, but developers need to really understand AR use cases, implementations, and UX if they want to use these to good effect.
Even with ARCore and the new ML system in Oreo, Google can't match iOS, because Oreo's install base is essentially zero right now and won't be over 20% for another two years.
Apple's ARKit is going to bring a whole new swath of exclusive apps to iOS. These APIs currently can't be recreated on Android, which means most apps won't be portable with all their features, if at all.
It's becoming harder and harder for devs to be cross-platform, and Google is falling behind Apple.
Apple and now Google are making it easier to produce AR apps, but the tech has been there for years. I made my first AR demo some 8 years ago (on a laptop) for a big event I was working on.

IMO, AR on smartphones and tablets is a fad that nobody will care about in 2 years. Remember all those gyroscope/accelerometer-based games? Yeah, me neither.

Maybe AR will be awesome when someone (Apple? Microsoft?) releases a pair of lightweight glasses that can produce stereoscopic images superimposed seamlessly over reality, but we are still very, very far away from that.
Sweet! This is amazing, and I was hoping it would happen sooner rather than later.

How long until they update the ChromiumAR project with ARCore support, and when will that be available as a preview? I know tons of people are waiting on it:

https://github.com/googlevr/chromium-webar
Very interesting. Just paging through the docs, the library doesn't seem very hard to use at all. The devil might be in the details, and it's hard to say how rushed this was after ARKit, but they already had the required bits and pieces done in some form or another.
I ran the sample app on my Galaxy S8 and it's a bit slow sometimes, but it tracks tables well. Floors, not so much.

Does anybody know where I can find more sample APKs to test?
Also check out: https://venturebeat.com/2017/08/28/8th-wall-raises-2-4-million-for-augmented-reality-tools/

Supports Unity, and works on both iOS and Android out of the box. (I'm not affiliated, just a supporter.)
I'm glad to see this, and I'll enjoy experimenting (probably via the A-Frame AR API). BUT:

What are the useful applications for AR outside of verticals?

I've not seen anything compelling in the phone-only incarnation.

The headsets have a lot of engineering issues, i.e. many years of work to overcome.

Even with headsets, it's unclear what value there is in adding the visual clutter and noise that most ambient/immersive computing demonstrations seem to assume.

Whatever value you can add generally requires constant headset wear for it to be ready to hand. This puts even harder engineering problems on the industry, as it forces super-light and easy headsets (Google Glass was not AR, nor a technical path to it).

Not seeing it yet.
Do I understand correctly that one big difference between AR on Android vs. iOS is that the next iPhone will have advanced 3D-sensing abilities that are currently years away on Android phones?
You can see from the video (from the way they avoid it) that the "augmentation" is always superimposed on the "reality", i.e. someone can't walk in front of the virtual objects you put on the table.

Is that a limitation of ARKit too?

What would it take to make it "real 3D"?
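For context: real occlusion needs per-pixel depth of the real scene, which monocular VIO doesn't give you. A hypothetical compositing pass, assuming a depth sensor like Tango's (all names here are made up for illustration), would look roughly like:

```java
// Hypothetical sketch: per-pixel occlusion given a real-world depth map
// (e.g. from a Tango-style depth sensor). Neither ARCore nor ARKit exposes
// scene depth today, so virtual objects always draw on top of the camera image.
class OcclusionSketch {
    void composite(float[] realDepth, float[] virtualDepth,
                   int[] cameraColor, int[] virtualColor, int[] out) {
        for (int i = 0; i < out.length; i++) {
            // Show the virtual pixel only where it is closer than the real surface.
            out[i] = (virtualDepth[i] < realDepth[i]) ? virtualColor[i] : cameraColor[i];
        }
    }
}
```

In practice you'd do this in a fragment shader with a depth texture; the hard part isn't the compare, it's getting realDepth at all from a single RGB camera.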
I'm a bit skeptical about the performance, to be honest: great tracking for AR requires careful selection and tuning of cameras and IMUs (inertial measurement units -- essentially MEMS gyro + accelerometer).

Apple has very tight control over their components, so they can do this, but managing it across a million OEMs and device models (as it is in the Android ecosystem) is close to impossible.

Tango tried to solve the problem by specifying a software and hardware stack for OEMs to use, but now it looks like Google is just too jealous to let Apple have a good time with ARKit, hence the "me too".
What's with the majority of the shots being cropped, or not involving the object moving completely in or out of the frame? Seems to me like they're potentially hiding some visual defects.
I'd rather they rewrote the Camera2 API, which is the most horrible API I've seen in my 20+ years in this profession. It's so bad one might think it's an elaborate prank, but no, Google does expect you to use it to interact with cameras. That's why all photo apps on Android are so ridiculously bad compared to iOS.
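For anyone who hasn't touched it: just getting a preview going means nesting several async callbacks. A stripped-down sketch (assumes a Surface, a background Handler, and a cameraId are already in hand; real error handling omitted):

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

// Roughly the minimum to get preview frames flowing with Camera2:
// open device -> (callback) build request + create session -> (callback) repeat request.
class Camera2Sketch {
    void startPreview(Context ctx, String cameraId, Surface surface, Handler handler)
            throws CameraAccessException, SecurityException {
        CameraManager manager =
                (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice device) {
                try {
                    CaptureRequest.Builder req =
                            device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    req.addTarget(surface);  // route frames to the preview surface
                    device.createCaptureSession(Arrays.asList(surface),
                            new CameraCaptureSession.StateCallback() {
                                @Override public void onConfigured(CameraCaptureSession s) {
                                    try {
                                        // Finally: continuous preview capture.
                                        s.setRepeatingRequest(req.build(), null, handler);
                                    } catch (CameraAccessException ignored) { }
                                }
                                @Override public void onConfigureFailed(CameraCaptureSession s) { }
                            }, handler);
                } catch (CameraAccessException ignored) { }
            }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, handler);
    }
}
```

Compare that to AVCaptureSession on iOS, where the equivalent setup is a handful of synchronous lines.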
Haha, check out the commits on their GitHub for three.ar.js: https://github.com/google-ar/three.ar.js/commits/master

> Build and increment to 0.1.1

> jsantell committed 26 minutes ago (failed)

....

> Fix linting

> jsantell committed 24 minutes ago (success)

edit: aww come on folks, it's all in good fun