> At the current state of my model, the model basically just clones the human driver as well as possible. That means the amount of brake is higher in curves

I read in another comment that you are still in high school, so maybe the above is because you do not have actual driving experience. But this is not how human drivers drive.

Human drivers brake *before* curves, and usually accelerate through them. This improves stability.

This is something you may want to consider for your next iterations. In any case, congratulations on your impressive work!
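One concrete way to act on this in a behavioural-cloning setup is to re-time the labels, i.e. train on the brake value a fraction of a second in the future, so the model anticipates rather than reacts. An untested sketch (the telemetry arrays and the half-second horizon are placeholders, not from your post):

    import numpy as np

    # Dummy logged telemetry, one value per frame (names are hypothetical).
    fps = 30
    brake = np.zeros(300)
    brake[150:180] = 0.6          # human brakes for one second, mid-curve

    # Re-time the labels: each frame is paired with the brake value
    # LOOKAHEAD frames later, so the model learns to brake *before*
    # the curve rather than in it.
    LOOKAHEAD = int(0.5 * fps)    # anticipate by half a second
    brake_target = np.concatenate([brake[LOOKAHEAD:], np.zeros(LOOKAHEAD)])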
Good start, OP. A good next step is to add temporal context (previous-frame information) so the model can resolve ambiguous cases. See e.g. the breakdown of comma.ai's openpilot model here (from their twitter, [0]), or the karpathy talk as well.

Also, when presenting results, prefer to include a longer demo or sub-demos that show strengths and failure cases, and move them to the top of the post rather than the bottom, imo. For a given reader, implementation details are either confusing and uninteresting (if not a subject-matter expert) or predictable and uninteresting (if an expert, you've seen many similar before); the audience for implementation details is the very small number of people who want to sit down and replicate or check your work. But demos / analysis are always novel and interesting to anyone, so lead with that :)

[0] https://medium.com/@chengyao.shen/decoding-comma-ai-openpilot-the-driving-model-a1ad3b4a3612
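For the temporal-context point, one simple pattern (a sketch only; the layer sizes and the two-output head are my assumptions, not openpilot's actual architecture) is a per-frame CNN encoder followed by a GRU over the sequence of features:

    import torch
    import torch.nn as nn

    class TemporalDriver(nn.Module):
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            # Small per-frame encoder; sizes are illustrative.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ELU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ELU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ELU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ELU(),
            )
            # The GRU carries information across frames: the temporal context.
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # e.g. steering + throttle/brake

        def forward(self, frames):             # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1])       # predict from the last timestep

    # Usage: a batch of 4 clips, 8 frames each.
    controls = TemporalDriver()(torch.randn(4, 8, 3, 120, 160))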
Seems really scary to exclusively use a neural network for safety-critical tasks like this, without having an explicit method guaranteeing safety.

You can't prove that a trained neural network is always correct, and thus this is likely going to kill someone at some point.

I think you definitely need a LIDAR (or in general something that can give an accurate 3D map of all surroundings) and some explicitly written code that can be proven to result in never hitting a car going the opposite direction in the opposite lane (provided they stay in their lane), as well as never hitting stationary obstacles and never going off the side of a mountainside road.
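To make the idea concrete, the explicit layer can be as simple as a hard override on top of the learned policy, something like this (an illustrative sketch only; the sensor interface and all the numbers are assumptions):

    # If a range sensor (e.g. LIDAR) reports an obstacle inside the
    # worst-case stopping distance, ignore the network and brake.
    def stopping_distance(speed_mps, decel=6.0, reaction_s=0.2):
        """Reaction travel plus braking distance, in metres."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel)

    def safe_controls(nn_steer, nn_throttle, speed_mps, min_obstacle_m):
        margin = 2.0                   # extra buffer in metres
        if min_obstacle_m < stopping_distance(speed_mps) + margin:
            return nn_steer, -1.0      # keep steering, command full brake
        return nn_steer, nn_throttle   # otherwise pass the network through

    # At 20 m/s with an obstacle 30 m ahead, the override fires.
    print(safe_controls(0.1, 0.5, speed_mps=20.0, min_obstacle_m=30.0))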
You do something similar in Udacity's SDC nanodegree, but you use their simulator rather than a real car. Interestingly, through trial and error on my project, the ELU activation function was the only thing that prevented vanishing-gradient problems. You use the same activation function, and I'm curious why you selected that one. I still wonder why it was the only function that worked for me.
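For what it's worth, the usual explanation is ELU's behaviour on negative inputs: elu(x) = x for x > 0 and alpha*(exp(x) - 1) otherwise, so its gradient below zero is alpha*exp(x) > 0 instead of ReLU's hard 0, and units don't "die". A quick check in plain NumPy, just to show the function and its gradient:

    import numpy as np

    def elu(x, alpha=1.0):
        return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

    def elu_grad(x, alpha=1.0):
        # Nonzero for x < 0, unlike ReLU's gradient, which is exactly 0 there.
        return np.where(x > 0, 1.0, alpha * np.exp(x))

    x = np.array([-2.0, 0.5])
    print(elu(x))       # [-0.8647  0.5   ]
    print(elu_grad(x))  # [ 0.1353  1.    ]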
Holy crap.

First: you're awesome. Keep it up.
Second: anyone else think this is a great example of how autonomous driving is a last-mile problem?
> A few days ago @karpathy presented their workflow with PyTorch and also gave some numbers: to train the Autopilot system with all its neural networks, you would have to spend 70,000 hours with a decent GPU; that is around 8 years (depending on which GPU you are using). In total, the Autopilot is a system of 48 neural networks. When we compare this to what I will show you, you are going to see that this is insane.

I'm very confident that it is not insane, for reasons that you have yet to discover, and the arrogance of calling it insane inspires a fear in me, considering what you appear to be doing on roads with other people.

Your project is very cool, but also very irresponsible. Are you using this on the road with other drivers? For the love of all things good, what are you thinking? Please clarify this. It's one thing to trust your own life to your creations; it's another thing to endanger everyone else's.
Neat project; at a minimum this should get you hired somewhere fancy. I'd love for all the self-driving car vendors to take a leaf out of your book and keep their software in-house until it agrees with what the real drivers do > 99.99% of the time, and the remaining 0.01% are disagreements where the human's choice led to an accident of sorts.
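That kind of "shadow mode" comparison is easy to prototype if you're logging both the human's controls and the model's predictions; a minimal sketch, with synthetic data standing in for real logs:

    import numpy as np

    def agreement_rate(human_steer, model_steer, tol=0.02):
        """Fraction of frames where model and human agree within tol,
        plus the indices of the disagreements worth reviewing."""
        agree = np.abs(human_steer - model_steer) < tol
        return agree.mean(), np.flatnonzero(~agree)

    rng = np.random.default_rng(0)
    human = rng.normal(0.0, 0.1, 10_000)            # logged steering
    model = human + rng.normal(0.0, 0.005, 10_000)  # model's shadow predictions
    rate, disagreements = agreement_rate(human, model)
    print(f"agreement: {rate:.2%}, frames to review: {len(disagreements)}")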