This is nice work, but anyone wanting to try it for themselves should be warned never to unpickle data received from an untrusted source.

https://blog.nelhage.com/2011/03/exploiting-pickle/
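To make the risk concrete, here is a minimal sketch (the payload and command are hypothetical) of how loading an untrusted pickle executes arbitrary code:

```python
import os
import pickle

class Malicious:
    """Illustrative malicious object: pickle will call whatever
    __reduce__ returns when the bytes are loaded."""
    def __reduce__(self):
        # On unpickling, this runs an arbitrary shell command.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # executes the command above -- never do this with untrusted bytes
```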
Was the track changed at all during the training? I'm wondering if there's some subtle overfitting here where the car learned to drive along only this specific track. The post mentions this, but I'm not sure what concrete actions were taken to avoid overfitting:

> The biggest problem I ran into was overfitting the model so that it would not work in even slightly different scenarios.

Regardless, a very cool project.
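One concrete action would be augmenting the recorded frames so the model can't simply memorize this one track. A minimal sketch, assuming NumPy image arrays with one steering label per frame (the function name and jitter ranges are illustrative, not from the post):

```python
import numpy as np

def augment(image, steering_angle, rng=np.random):
    """Hypothetical augmentation pass: randomly flip left/right (negating
    the steering label to match) and jitter brightness, so the model sees
    more variety than the single recorded track."""
    img = image.astype(np.float32)
    # Random horizontal flip -- the steering angle must be mirrored too.
    if rng.rand() < 0.5:
        img = img[:, ::-1, :]
        steering_angle = -steering_angle
    # Random brightness jitter to reduce dependence on track lighting.
    img = np.clip(img * rng.uniform(0.6, 1.4), 0, 255)
    return img.astype(np.uint8), steering_angle
```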
Great summary. I always think it's best when machine learning projects have visuals and videos to showcase what is actually being learned.

This simple project is a good example of supervised learning from what I can tell: the network will learn to steer "as good as" the human who provides the training data. For a different (and more complex) flavor of algorithm, check out reinforcement learning, where the "agent" (computer system) can actually learn to outperform humans. Stanford's autonomous helicopters always come to mind: http://heli.stanford.edu/
Consider the fairly massive changes to the competitive landscape ushered in by the *combined* factors of self-driving and electric vehicles:

- For liability reasons, most of the algorithmic IP will likely be open sourced, either because it's required by regulators or because it's the most efficient way for carmakers to socialize the risk of an algorithmic failure.

- Electric vehicles have many fewer moving parts, which means the remaining parts are likely to be converged upon by the industry and used widely. This breaks a lot of platform-dependency issues and allows for the commoditization of parts like motors. As these become standardized, commoditized, and easily comparable on the basis of size, torque, and efficiency, there will be virtually no benefit to carmakers in manufacturing their own. The same applies to aluminum monocoque frames, charging circuitry, etc.

Tesla currently differentiates its models by how many motors and what size batteries they have, but beyond that it's mostly just cabin shape, along with new innovations like HEPA-filter cabin air cleansing, which will likely become a standard part of all future models.

- Battery tech works the same way as motors, with little competitive advantage to be gained by automakers, especially since most of the IP in this area is already spoken for.

Compare the number of patentable parts in a Model T vs. a 1998 Taurus vs. a 2017 internal-combustion vehicle vs. a Tesla. Tesla is one innovator, and GM likely already patented many inventions relating to EV technology back in the original Chevy Volt era.

All this is why Tesla acquired SolarCity and is attempting to make an infrastructure play rather than a technology play. Only Musk's rare ability to self-finance big risks makes this possible, since infrastructure moonshots featuring $30K+ hardware units are hard to fund.
Not to put down the OP's work (I think it's a great project), but I'm just wondering what advantages an ML approach might have over "traditional" CV algorithms. In a really well-controlled environment lanes will be easy to detect, and computing the difference between the current heading and the lane direction should be doable; maybe if we're talking about complex outdoor environments and poor sensors then ML would have an advantage? Or if we're teaching the robot what the concept of a lane is?

I think back to the days when I basically implemented lane following with an array of photoresistors, an Arduino, a shitty robot made from Vex parts, and some C code. The problem was much simpler than the one presented in this article, but then the computational resources used were orders of magnitude less. At what point, then, do you decide "OK, I think the complexity and nature of the problem warrants the use of ML" or "Hmm, I think a neural network is overkill here"?
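For reference, the "traditional" CV baseline being described might look something like this OpenCV sketch (the function name, thresholds, and steering heuristic are all illustrative, not from the post):

```python
import cv2
import numpy as np

def lane_offset(frame_bgr):
    """Rough sketch of a classic lane-following pipeline: edge detection
    plus a Hough transform to find lane lines, then a steering signal from
    how far their midpoint sits from the image centre."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    if lines is None:
        return 0.0  # no lanes found; hold course
    xs = [x for line in lines for x in (line[0][0], line[0][2])]
    midpoint = sum(xs) / len(xs)
    centre = frame_bgr.shape[1] / 2
    return (midpoint - centre) / centre  # steering error in roughly [-1, 1]
```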
I updated this post with some of the great feedback from the comments. Also I just ported the algo used by the last DIYRobocar race winner, CompoundEye. Here's that post: https://wroscoe.github.io/compound-eye-autopilot.html#compound-eye-autopilot

Thanks!
Nicely done! But I'm assuming this is more of an exercise than a real-world application of ML? I say this because the task of keeping a car between two lines is trivially done using control algorithms. Of course, the CV part -- "seeing" the lines -- requires some form of ML to work in the real world.
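For the control half, here is a minimal sketch of the kind of controller being alluded to (the gains and timestep are placeholders, not tuned for this car):

```python
class PID:
    """Minimal PID controller: given a cross-track error (how far the car
    is from lane centre), produce a steering command."""
    def __init__(self, kp=0.8, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt=0.05):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. steering = pid.step(lane_offset_from_vision)
```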
I might be missing it, but I don't see instructions for installing TensorFlow/Keras on the Raspberry Pi in the Donkey repo or in this blog post (needed to actually run the trained model, it looks like). For TensorFlow, there are pre-built binaries and instructions to build from source here:

https://github.com/samjabrahams/tensorflow-on-raspberry-pi

Note: I am the owner of this repo.
Two major errors: 1) this doesn't seem to be controlling overfitting against the right validation set, and 2) there isn't a test set at all (separate from validation).

A frame-level validation split (e.g. Keras' "validation_split" on shuffled data) is not the right thing to do when your data is image *sequences*, because you will get essentially identical data in training and validation.

Because of this, the numbers/plot here might as well be training accuracy numbers.
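A sketch of a split that avoids this, assuming the recordings can be grouped into whole driving sessions (the helper name and session structure are assumptions, not from the repo):

```python
import numpy as np

def split_by_session(sessions, val_sessions=1, test_sessions=1):
    """Hold out entire recording sessions for validation and test instead
    of splitting at the frame level, so near-identical neighbouring frames
    can't end up on both sides of the split. `sessions` is assumed to be a
    list of (X, y) array pairs, one per recorded drive."""
    train = sessions[:-(val_sessions + test_sessions)]
    val = sessions[-(val_sessions + test_sessions):-test_sessions]
    test = sessions[-test_sessions:]
    stack = lambda parts, i: np.concatenate([p[i] for p in parts])
    return ((stack(train, 0), stack(train, 1)),
            (stack(val, 0), stack(val, 1)),
            (stack(test, 0), stack(test, 1)))

# (X_tr, y_tr), (X_val, y_val), (X_test, y_test) = split_by_session(sessions)
# model.fit(X_tr, y_tr, validation_data=(X_val, y_val), ...)
```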
Apologies if I'm being stupid, but I can't find the details on how to physically connect the hardware together anywhere. Is this still on the todo list? I'm interested in applying this tutorial and making an autonomous RC car.
What I would love to see is an end-to-end neural network solution: camera input comes in at one end, and outputs for speed and steering angle come out the other.

But rather than a black box, it should be explainable what the different layers are doing. If neural nets are Turing machines, then we should be able to compile some parts of the net from code.

Then the net is a library of layers: some layers trained with backprop, some compiled from code.
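The end-to-end half of that is straightforward to sketch with the Keras functional API (layer sizes and input shape here are illustrative, not taken from the post):

```python
from keras.models import Model
from keras.layers import Input, Conv2D, Flatten, Dense

def build_end_to_end(input_shape=(120, 160, 3)):
    """Camera image in, steering angle and throttle out -- a rough sketch
    of the end-to-end idea described above."""
    img_in = Input(shape=input_shape, name='camera')
    x = Conv2D(24, (5, 5), strides=(2, 2), activation='relu')(img_in)
    x = Conv2D(32, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Conv2D(64, (3, 3), strides=(2, 2), activation='relu')(x)
    x = Flatten()(x)
    x = Dense(100, activation='relu')(x)
    steering = Dense(1, name='steering')(x)
    throttle = Dense(1, name='throttle')(x)
    model = Model(inputs=img_in, outputs=[steering, throttle])
    model.compile(optimizer='adam', loss='mse')
    return model
```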
X is in the range [0, 255]. They don't show code converting it to a much saner range for the network they've chosen. Is the full source somewhere?
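The usual fix would be a preprocessing step like the one below (an assumption about what the training script should do, since the post doesn't show it):

```python
import numpy as np

def normalize_images(X):
    """Assumed preprocessing step: map raw pixel values from [0, 255]
    to [-1, 1], a saner range for the network."""
    return (X.astype(np.float32) / 127.5) - 1.0

# Alternatively this can live inside the model itself, so it is applied
# identically at training and inference time, e.g.:
#   from keras.layers import Lambda
#   x = Lambda(lambda img: img / 127.5 - 1.0)(img_in)
```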