To put this in perspective, 5000 people at $200k per year (which is conservative if you include benefits, etc.) is 1 billion dollars in comp per year.<p>So OpenAI is spending a billion dollars over the next several years.<p>Microsoft is spending a billion dollars per year.<p>Google, etc.: do the math.<p>There's literally billions of dollars now being spent on moving deep learning forward. Pretty amazing when I think back to 2011, when there were machine learning conferences where literally no one I spoke to had heard of deep learning.<p>People worrying about a second AI winter are like the people who have been worrying about an "internet bubble" since 2004. It's fine to be worried, there will be bubbles, but this time it's different and there are many reasons for that. There is no "internet industry" anymore; it's become more segmented and just plain <i>bigger</i>. Similarly, there will be no "AI industry"; it will branch out, and there are more potential applications of "understanding data and automating decisions" than there were in the 80s.<p>> something that they are branding AI - I might wait and see how much of it is on real AI - what ever that is.<p>Yeah, I wouldn't get hung up on that; they're calling it AI because that's easier to explain to reporters than deep learning. The difference is that deep learning techniques are already being used in released products and companies are looking to do more of that, so there is a very real definition and set of goals associated with what these groups are doing; it's not just, "hey everyone, let's make an AI!"
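The back-of-the-envelope math above is easy to sanity-check; here's a minimal sketch (the headcount and per-person comp figures are the ones assumed in the comment, not Microsoft's actual numbers):

```python
headcount = 5000
avg_comp = 200_000  # dollars per person per year, the "conservative" figure above

total_comp = headcount * avg_comp
print(f"${total_comp:,} per year")  # $1,000,000,000 per year -> one billion dollars
```

So at those assumptions, the comp bill alone hits a billion dollars annually before any hardware or data-center spend.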
I feel like, as a solo app developer, I'm being left out of this 'revolution'.<p>It seems like deep learning only makes sense if you have enough data to feed the algorithm — the kind of data that only big companies can produce or harvest.<p>Sure, you can produce some video and audio data and you can spider the web a little bit, but that doesn't even come close to the resources that these big corporations have and the 'depth' of learning that they can achieve.<p>So I'm not even trying.<p>Or should I?<p>Is there any place for solo/indie developers in this field?
A friend of mine was an early investor in DeepMind. For like a year and a half, because that's how long it took Google to buy them out for somewhere around $400mm.<p>At the time, I thought that sounded like an amazing exit (OK, it still sounds like an amazing exit), and it wasn't clear how Google could get that value out in a reasonable timeframe.<p>I was so, so wrong. The amount of value, hidden and public, in that acquisition is astounding. Whoever put it together deserves a massive bonus, ideally in Alphabet stock.<p>MS putting $1bn a year in on AI is a catch-up game. They may do very well at it, but make no mistake -- we are only seeing the public side of the value Google is generating. I don't imagine we'll ever see blog posts about how they're tuning AdWords using AI, for instance. But you can bet that the same sort of gains they are seeing with translation, audio generation, and game playing, they are also seeing in the ad space.
Reading this carefully, the 5000-person group is "AI and Research" not just AI. (5000 people would be a lot to have working on AI.) This group includes Bing, Cortana and the current research group, so there are a lot of people not working directly on AI. That said, it is a significant change in focus, making AI a priority.
We seem to be at an inflection point with AI -- companies across the board are investing billions of dollars into AI R&D and I expect that we'll start to see some really amazing products and services coming out of this in the coming decades.
I know IBM and Watson often get a bad rap here. But IBM made the same exact move almost 3 years ago with the creation of the Watson unit. There is way too much PR around Watson, but IBM should be credited for having called the current round of AI investments way before anybody else.<p>Disclosure: I was part of the initial IBM Watson team, left recently.
I love the fact that it took Ballmer leaving for them to really get serious about cutting-edge tech again. Bill Gates is probably wishing he never met Ballmer at Harvard.
So, Microsoft is going to put 5000 people on applications of artificial intelligence (AI). Likely they will also include machine learning (ML).<p>IMHO, there is some value there.<p>But, IMHO, they would be better off drawing all they can, including AI/ML but much more, from the QA section of research libraries. There they will find oceans of material — next to which AI/ML look like farm ponds — in pure and applied math as math, but also operations research, statistics, optimization, control theory, applied probability, stochastic processes, mathematical finance, mathematical parts of high-end electronic engineering, signal processing, experimental design, quantitative methods in business, and much more.
Despite the generally positive spin around it ("we did it as a learning project"), most people would agree that Tay was both a technical and a PR failure.<p>And the pattern repeats: Microsoft releases an AI which fails. Tesla's Autopilot cannot "see" a white object against a white background. Apparently, Google also had a crash which was recently attributed to human error. My guess is that this list is not going to stop here.<p>Suppose I ask you to build me a teleporting machine. You try, and like in the movie Spaceballs, everything from my torso up comes out aligned wrong. This is then declared part of the iterative learning process, except that the cost borne by the corporations for the failure is quite minuscule compared to the cost borne by the affected party (risk asymmetry).<p>So while people talk about the huge advancements in AI, shouldn't we be quite skeptical, especially at this point? Since none of us have seen the alternate parallel universes, and considering<p>a) the resources being thrown at the problem,<p>b) the risk asymmetry involved,<p>c) the privacy intrusion involved in the data collection (you knew I would bring it up, didn't you?), and not to mention<p>d) the inability of anyone to demand any kind of transparency from these AI pioneers,<p>I might as well ask: are we as a society paying too high a cost for this progress? Could we really not do any better than this?
I am intrigued by the mention of "Monthly Q&A" at the end of the email. Is this Satya Nadella's version of Google's TGIF meetings? If so I heartily approve.<p>I left Microsoft for Google in 2010 and TGIF Q&A was one of the things I appreciated the most about Google culture (despite the occasional screwball live question). I think any company could benefit from a similar tradition.
It's funny — the internet isn't old enough to have links to the CYC project in Austin, which blew through hundreds of millions of DoD dollars in the '80s and early '90s.<p>What's old is new.
What is the current fascination with AI? It seems to be talked about everywhere right now. Yet "AI" in a core sense has been around since the 70's...
How do you really start a 5000-person division all at once and expect to succeed?<p>I assume AI development is a niche field, where you would want smaller, dedicated teams of brilliant researchers and practitioners focusing on a single problem.<p>I can't imagine the overhead in maintaining and operating such a large division. I hope they know what they are doing.
5000 more people working hard to make themselves obsolete. I wonder how long it will take those working on AI to figure out that they're doing to themselves and the rest of the IT industry what the IT industry has already done to many others. If and when they succeed we'll finally know the true meaning of the term disruption.
Prepare for Clippy 2.0!<p>(Joke; Microsoft Research is actually highly respected — it's just that, like Xerox, they have some trouble turning research into products.)