To me this was the least interesting part of the Stadia reveal; state share was the real innovation, IMO. Would any serious game studio use style-transfer ML to actually replace their visual artists? The quality didn't seem to be there.

That said, it's an awesome PoC. Just not something I see being practically applicable.
A neat tech demo, but as with so many things coming out of The Valley these days, I don't think anybody actually wants this. No actual game is so bland that you'd want the ability to overlay your own "themes" on it after the fact. AI has lots of potential for assisting artists, and maybe part of that will include post-processing effects, but putting such things in the player's hands is pointless.
This is certainly just a neat gimmick, but it's interesting to me for a particular reason: the Stadia box uses an AMD GPU, and per the Google announcement, all the style transfer is done in real time.

This suggests we may soon have better AMD support in TensorFlow.
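If the ROCm builds keep maturing, spotting the AMD card from TensorFlow should look no different from CUDA. A minimal sanity check, assuming the tensorflow-rocm package and a recent TF 2.x API (my assumption, nothing from the Google announcement):

    # Check that TensorFlow sees the AMD GPU under a ROCm build.
    # Assumes the tensorflow-rocm package; nothing Stadia-specific here.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    print("GPUs visible to TensorFlow:", gpus)

    if gpus:
        # Run a trivial op on the first GPU to confirm kernels dispatch.
        with tf.device('/GPU:0'):
            x = tf.random.uniform((1024, 1024))
            y = tf.matmul(x, x)
        print("matmul ran on:", y.device)

The point being: if this prints a GPU on a Stadia-class AMD part, much of the rest of the TF ecosystem comes along for free.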
Consensus in this thread seems to be that this is a neat gimmick but nothing else.

Maybe it's because I have a friend working on a video game as a solo developer, putting a *lot* of energy, money, and time into art, but I really see the potential in this type of thing.

This isn't really for the player's benefit, so they can put on whatever mods they want. It's for fast iteration: letting artists and creators automate a huge (and sometimes necessary-evil) burden.
It could be that they're throwing it out there so the community can try to find a practical application for it. They clearly understand that what they're demoing isn't the end product, so to speak, but just the best way to convey the general idea.
I'm already excited to see what Microsoft will show at E3 with the xCloud streaming service they've been working on for a couple of years. Lots of great progress in that area right now.
I've seen a number of these impressive-looking demos, for example: https://www.theverge.com/2018/12/3/18121198/ai-generated-video-game-graphics-nvidia-driving-demo-neurips

The sense of realism and the variety in level of detail show the potential, BUT the consistency, sharpness, and realism of object boundaries would not survive close inspection.
This is not a consumer product at all. I don't know how useful it will be, but it's designed to be a development tool for early art-direction visualization.

Even for 2D visualization it could be very useful if you could drop images into your website mockup files and have it intelligently apply color-palette and texture information to your existing content.

It won't look good, but it might look *good enough* to save you from having to manually tweak the design 20-30 times to explore a range of styles.
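Even a dumb version of that palette step is cheap to prototype: pull dominant colors from a reference image with k-means and snap the mockup's pixels to the nearest palette entry. A toy sketch of my own (Pillow + scikit-learn, no neural net, and certainly not whatever Google is doing):

    # Toy palette transfer: extract dominant colors from a style image,
    # then snap every pixel of a mockup to its nearest palette color.
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def extract_palette(style_path, n_colors=8):
        pixels = np.asarray(Image.open(style_path).convert('RGB')).reshape(-1, 3)
        km = KMeans(n_clusters=n_colors, n_init=4).fit(pixels)
        return km.cluster_centers_  # (n_colors, 3) RGB centroids

    def apply_palette(mockup_path, palette):
        img = np.asarray(Image.open(mockup_path).convert('RGB')).astype(float)
        flat = img.reshape(-1, 3)
        # Distance from every pixel to every palette color; take the nearest.
        dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)
        recolored = palette[dists.argmin(axis=1)].reshape(img.shape)
        return Image.fromarray(recolored.astype(np.uint8))

    palette = extract_palette('style.jpg')
    apply_palette('mockup.png', palette).save('mockup_restyled.png')

It won't preserve texture the way a real style-transfer net would, but for ruling a palette in or out in seconds, crude is fine.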
I don't think high-budget games will be heavy users of style transfer anytime soon, but the important part is that it's done in real time. Assuming the latency is reasonable, AA, super-resolution, or other neural-net-based quality improvements could become a real thing.
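"Real time" here is really a per-frame budget: at 60 fps you get about 16.7 ms for everything, so the net has to infer in a few milliseconds. A quick way to sanity-check whether a tiny upscaler fits, using a toy ESPCN-style model of my own (not anything from the Stadia demo):

    # Time a tiny sub-pixel-convolution upscaler against a 60 fps budget.
    import time
    import tensorflow as tf

    scale = 2
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu',
                               input_shape=(None, None, 3)),
        tf.keras.layers.Conv2D(3 * scale**2, 3, padding='same'),
        tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale)),
    ])

    frame = tf.random.uniform((1, 540, 960, 3))  # 960x540 -> 1920x1080
    model(frame)  # warm-up so one-time setup isn't counted

    start = time.perf_counter()
    for _ in range(100):
        model(frame)
    ms = (time.perf_counter() - start) / 100 * 1000
    print(f"~{ms:.1f} ms per frame (60 fps budget: 16.7 ms)")

If a throwaway model like this already fits the budget on a datacenter GPU, a properly optimized one has real headroom.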
The useful application for AI in streamed gaming seems like it would be predicting the next 30 ms to avoid input lag.

This seems gimmicky at best.
Prediction has to take place on the *client*, though, so the challenge is making it cheap.
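The classic cheap trick is dead reckoning: extrapolate the last authoritative state across the network delay, then ease toward the server's answer when it arrives. A sketch of the general technique (not anything Stadia has described):

    # Dead-reckoning sketch: render an extrapolated position while
    # waiting ~30 ms for the authoritative server state.
    from dataclasses import dataclass

    @dataclass
    class EntityState:
        x: float
        y: float
        vx: float  # last known velocity, units/sec
        vy: float

    def predict(state: EntityState, lag_sec: float) -> EntityState:
        """Linear extrapolation over the network delay."""
        return EntityState(x=state.x + state.vx * lag_sec,
                           y=state.y + state.vy * lag_sec,
                           vx=state.vx, vy=state.vy)

    def reconcile(predicted: EntityState, server: EntityState,
                  blend: float = 0.2) -> EntityState:
        """Ease toward the server state instead of snapping, to hide error."""
        return EntityState(x=predicted.x + (server.x - predicted.x) * blend,
                           y=predicted.y + (server.y - predicted.y) * blend,
                           vx=server.vx, vy=server.vy)

    last = EntityState(x=10.0, y=5.0, vx=120.0, vy=0.0)
    shown = predict(last, lag_sec=0.030)  # what the client draws this frame
    print(shown)  # x = 13.6: 30 ms of motion covered before the server confirms

A couple of float multiplies per entity per frame is about as cheap as it gets, which is why even thin clients can afford it.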
Silent Hill concepts could be fun with this; otherwise, eh. I'd like to see JoJo-style palette swaps used to highlight tension, but I don't see that being implemented with something like this.
Is anyone else concerned this will be yet another Google product that falls by the wayside in a few years? That has essentially become my biggest fear with any new product they announce or release.