As a researcher in the field, I am not quite sure how I feel about this kind of resource. I am all for making research accessible to a wider audience, and I believe that you don't need a PhD, or any degree, to do meaningful work.<p>At the same time, the low barrier to entry and the hype have resulted in a huge number of people downloading Keras, copying a bunch of code, tuning a few parameters, and then putting their result on arXiv so they can put AI research on their resume. This has produced so much noise and low-quality work that it really hurts the field.<p>You don't need a degree, but I think you do need to spend some time getting a deep enough understanding of what's going on under the hood, which often includes some math and takes time. This can be made accessible, and there are plenty of good resources for it. But all these "become an AI pro by looking at some visualizations and copying this code" resources may hurt more than they help, because they give the illusion of understanding when it's actually not there. I wouldn't want people learning (solely) from this touching my production systems, writing blogs, or putting papers on arXiv.
I'm doing deep learning without a data science background.<p>Some of my current results are:<p>* <a href="https://vo.codes" rel="nofollow">https://vo.codes</a><p>* <a href="https://trumped.com" rel="nofollow">https://trumped.com</a><p>The voices need better data curation and longer training, but some speakers, such as David Attenborough, are quite good.<p>I've also built a real-time streaming voice conversion system. I want to generalize it better so that it can be an actual product. I think it could be a killer app for Discord. Imagine talking to your friends as Ben Stein or Ninja.<p>I've been watching TTS and VC evolve over the last few years, and it's incredible the pace at which things are coming along. There are now singing neural networks that sound better than Vocaloid. If you follow researchers on GitHub (seriously, their social features are a killer app!), you'll see model after model get uploaded, complete with citations and results. It's super exciting, and it's the future I hoped research would become.<p>If you're diving into this, I would recommend using PyTorch, <i>not</i> TensorFlow. PyTorch is much easier to use and has better library/language support. TorchScript / JIT is really fantastic, too. I mean this even if you're just poking around with someone else's model: find a PyTorch alternative if you can. It's much easier to wrap your head around. TensorFlow is just too obtuse, for no good reason.
Tangentially related (and also using the ubiquitous MNIST dataset), Sebastian Lague started a brilliant but unfortunately unfinished video series on building neural networks from scratch.<p>This video was an absolute eye-opener for me [1] on what classification is, how it works, and why a non-linear activation function is required. I probably learned more in the five minutes spent watching it than from doing multiple Coursera courses on the subject.<p>[1] <a href="https://www.youtube.com/watch?v=bVQUSndDllU" rel="nofollow">https://www.youtube.com/watch?v=bVQUSndDllU</a>
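The point about needing a non-linear activation can be shown in a few lines of NumPy (my own sketch, not the video's code): without an activation between them, two stacked linear layers collapse into a single linear layer, so depth buys you nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))    # a batch of 4 inputs
W1 = rng.normal(size=(3, 5))   # "hidden" layer weights
W2 = rng.normal(size=(5, 2))   # output layer weights

# Two stacked linear layers with no activation in between...
deep = (x @ W1) @ W2
# ...are exactly equivalent to one linear layer with combined weights.
shallow = x @ (W1 @ W2)
assert np.allclose(deep, shallow)

# With a non-linearity (ReLU) in between, the collapse no longer holds.
nonlinear = np.maximum(x @ W1, 0.0) @ W2
assert not np.allclose(nonlinear, shallow)
```

This is why a purely linear network, no matter how many layers, can only draw linear decision boundaries.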
The whole AI/ML field has become so hyped up that it's probably time for me to find another topic of interest in software engineering. It's a weird melange nowadays where frameworks and "academic credentials" are fused together by major tech companies, and it leaves me, someone who has deployed a dozen classical ML models into production that are still running after a couple of years, wondering what this is all about.<p>Overall, having worked with people from different backgrounds, an ML-related PhD is usually neither correlated nor anti-correlated with having a good understanding of the relationship between models and their applications.<p>I wish we could leave the frameworks and name-dropping behind and talk more about what it takes to evaluate predictions, how to cope with biases, etc.
It's a pity that most TensorFlow tutorials out there seem to deal with images. We tried to use it for real-time data classification (data -> [yes | no]). Every tutorial out there seems to assume you're using Python (which is probably not an invalid assumption). Here's my 2c from trying to use TensorFlow with C++:<p>a) Loading SavedModels is a pain. I had to trawl the TensorFlow repo and the Python wrappers to see how it worked.<p>b) It's incredibly slow. It added ~250ms to our latency. We had to drop it.<p>c) It has a C++ framework that doesn't work out of the box; you have to use the C lib that wraps an old version of the C++ framework (confused? me too).<p>d) It's locked to C++03.<p>TensorFlow Lite looked to fit the bill for us, but our models weren't convertible to it. We no longer use TensorFlow.
This is a nice introduction, even though, like most tutorials on ML, it goes from 0 to 100 in 2 lessons.<p>A couple of years ago I started studying ML, and I have a design background, so I needed to digest all the math and concepts slowly in order to understand them properly.<p>Now I think I understand most of the fundamental concepts, and I've been using them quite a lot for creative applications and teaching, and I have to say the best resource I've found for beginners, by far, is "Make Your Own Neural Network" by Tariq Rashid.<p>It starts really from the beginning and takes you through all the steps of building a NN from zero, with no previous knowledge required. Really good.
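To give a flavor of what that from-scratch approach looks like (a minimal sketch of my own, not the book's code): a two-layer network with sigmoid activations, trained by hand-written backpropagation on XOR, the classic problem a single linear layer cannot fit.

```python
import numpy as np

# Toy dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

lr, losses = 0.5, []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: the chain rule written out by hand
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent update
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Everything the book spends its chapters on (forward pass, error, chain rule, weight updates) is visible in those few lines; the layer sizes and learning rate here are arbitrary choices for the sketch.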
Since everyone is talking about hype in ML, I wish there was some hype for good ol' conventional scientific computing. Yes, it's not so sexy: you have to build your model yourself, and then the hard work is in finding and verifying a suitable numerical method and finally devising a solid implementation. It requires a vast number of different skills, anything from pure math to low-level programming, and it is definitely not trivial work, but it does not seem to pay that well.
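As a tiny illustration of what "verifying a suitable numerical method" means in practice (my own sketch, not tied to any particular project): integrate dy/dt = -y, whose exact solution is e^{-t}, and check that the solver's error shrinks at the rate the theory predicts when the step size is halved.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y          # dy/dt = -y, exact solution y(t) = e^{-t}
exact = math.exp(-1.0)

err_coarse = abs(rk4(f, 1.0, 0.0, 1.0, 10) - exact)
err_fine = abs(rk4(f, 1.0, 0.0, 1.0, 20) - exact)

# RK4 is fourth order: halving the step size should cut the error ~16x.
assert err_fine < err_coarse / 10
```

Convergence checks like this against a known analytic solution are the bread and butter of the verification work described above.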
I am constantly puzzled by people saying that AI is overhyped and that fresh grads won't have enough jobs. Almost every real-life industry (retail, logistics, construction, farming, heavy industry, mining, medicine) has only recently started to try AI. The number of manual and suboptimal tasks that have yet to be automated and optimized is enormous. I am pretty sure there is more than enough work for applied data scientists with domain knowledge in the industries mentioned.
This is very well done, hitting on some pain points and explaining how to work around them.<p>I have devoted close to 100% of my paid working time to deep learning for the last six years (most recently managing a deep learning team), and not only has the technology advanced rapidly, but the online learning resources have kept pace.<p>A personal issue: after seven years of not updating my Java AI book, I am taking advantage of free time at home to do a major update. The new material on deep learning was the most difficult change, because there are so many great resources and there is only so much you can do in one long chapter. I ended up deciding to do just two DL4J examples and then spend most of the chapter on advice.<p>The field of deep learning is getting saturated. Recently I did a free mentoring session with someone with a very good educational background (PhD from MIT), and we talked about needing specializations and specific skills, using things like DL and cloud dev ops as necessary tools, but not always enough to base a career on.<p>Working through great online DL material can definitely help people's careers, but great careers are usually made by having good expertise in two or three areas. Learn DL, but combine it with other skills and domain knowledge.
I attended a conference talk by an FB AI engineer presenting her paper, with backprop equations so obviously wrong my eyes hurt, and incorrect definitions of objects. It did not stop her from participating (btw, it is always unclear who did what) in state-of-the-art research in object detection.<p>A PhD is overrated in the deep learning context. It is more about forging the intellectual resilience and the ability to pursue ideas for months or years than about learning useful things/tricks/theorems.
Twenty-five years ago, this would have been "LINUX, UNIX, and serving, without a PhD", and Matt Welsh's Linux Installation and Getting Started was the intro (<a href="https://www.mdw.la/papers/linux-getting-started.pdf" rel="nofollow">https://www.mdw.la/papers/linux-getting-started.pdf</a>). I was one of many who adopted Linux early using this book (later I read the BSD Unix Design and Implementation, which I would describe as senior-undergrad/junior-grad-student material).<p>Having those sorts of resources to introduce junior folks to advanced concepts is really valuable to me: my experience is that I learn a lot more by reading a good tutorial than a theory book, up until I need to do advanced work (this is particular to my style of learning; I can read code that implements math, but struggle to parse math symbology).
The video version [1] is also pretty awesome, though the code itself is a bit outdated now.
It explains a lot of very practical issues that you won't find in most academic textbooks but that you encounter every day in practice.<p>[1] <a href="https://www.youtube.com/watch?v=vq2nnJ4g6N0" rel="nofollow">https://www.youtube.com/watch?v=vq2nnJ4g6N0</a>
I have a simple rant here. Every now and then, all these big-money companies come out with statements that doing AI/ML is very easy, that everyone, including their cats, should do AI/ML courses and training (preferably on their platform), and that once that is done the job market is yours. Reality is far from this.<p>- Today, AI/ML does not have the capabilities marketed by these big companies. Incidentally, the marketing is targeted at governments, big non-tech companies, and gullible undergraduates.<p>- Undergrads often take these training courses, in which they acquire the skill set to call these APIs, and then flood a job market in which data-entry or data-analyst jobs are tagged as AI/ML jobs.<p>- High-paying jobs in AI/ML still require a Masters or PhD, or a mathematical background.<p>In conclusion, the current hype around AI/ML is misleading gullible undergrads and governments (I don't mind the government being cheated, though).
The title is obviously clickbait-y, but that's fine: they're trying to sell a product (Google Colab).<p>IMO, if you're interested in AI research or ML engineering, you already know that, in order to avoid getting people killed, you have to understand how it works under the hood. You're doing yourself, your employer, and your fellow humans a favour.<p>Just keep up the good work, and ignore the bullshit. If an AI winter comes, you'll be well prepared to migrate to another engineering role.
I wonder if Google is using this resource to train its own staff without PhDs and then letting them work as ML engineers? That would lend credibility to such a program. Instead, it seems more aimed at selling more ML computing power to the masses (who won't really understand how to use it to get meaningful results).
As usual with tools, it's about the use rather than the instrument. Domain knowledge still has advantages over generalism in applied and critical fields. I mean, you can't yet do medicine or materials with AI/ML without understanding the domain, can you?
This is something I did with fast.ai - <a href="https://deeplearningmantra.com/" rel="nofollow">https://deeplearningmantra.com/</a>
Is this tutorial aimed at entry level? Those graphs are quite difficult. I guess it will take a lot of time to do homework on some fundamental curricula.
It is totally possible and easy to use TensorFlow/Torch if you haven't skipped your linear algebra classes. A PhD is needed if you are going for a job where you design a sophisticated model (not just adding layers, but experimenting with activations, attention, etc.).
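For context on what "experimenting with attention" involves at its core, here is a minimal NumPy sketch of scaled dot-product attention (my own illustration, with arbitrary shapes): each query position produces a probability distribution over the key positions and returns a weighted average of the values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # one value vector per key

out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (4, 8)
assert np.allclose(w.sum(axis=-1), 1.0)  # each query's weights sum to 1
```

The design work the comment alludes to is in where you place blocks like this, how you mask and scale them, and how many heads you use, not in the dozen lines themselves.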