I really think the concerns about bad code are overblown. I had a friend in college who, for a CS class's final project, wrote an entire game in Java within a single, enormous function body. I still don't know how he even managed to do it, but it basically worked and he passed the class. He sort of understood how functions worked, but he found them confusing, so he didn't use them.<p>This wasn't at some community college or anything. This was at Georgia Tech.<p>That's an extreme case, and I'm certainly not saying that its happening at a respected engineering school makes it acceptable. But my point is that in entry-level courses (like the ones where you'd be implementing a clip function), even professors grade on getting the job done. Code quality just doesn't enter the picture at that level.<p>The thing is, trying to teach good code directly is pointless. Your less bright students will accept the dogma and never actually understand how to apply it usefully. Your brightest students will see it as a bunch of useless bullshit that's holding them back.<p>If you want to teach good code, here's how you do it: make a student write and maintain a large project. Make them <i>keep it running</i> for two years, while you make them add more and more features. Keep checking it against an automated test suite which they do not have access to, and grade them on its correctness. Give them the resources to learn about best practices, but never tell them they have to use them.<p>Then, at the end of two years, let them rewrite it from scratch. <i>Then</i> you will see a student who has learned the value of good coding practices.
The author completely misses the mark here - probably due to limited exposure to third-party code in real-life production systems.<p>Code auto-grading, at least at Coursera, is usually done by running comprehensive unit tests, which extensively exercise border cases as well. These test suites are often 5-10 times larger than the actual submitted code, and it is difficult to imagine anybody outside this type of environment spending so much extra time designing (and testing!) test suites with 100% coverage.<p>Moreover, code submissions have to comply with (or implement, in the case of Java) predefined interfaces. And some courses (e.g. Scala) take the style checker's output into account (it decides 20% of the grade in the Scala course).<p>In summary, well-thought-out test suites and interface specifications demand well-designed code submissions; in real life, poor comments or sloppy expressions are a very minor nuisance compared to poorly designed interfaces and forgotten border cases.
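To give a sense of the scale: even for a trivial clip function, a grader-style suite balloons once border cases are covered. A minimal sketch in Scala using plain assertions (the actual grading framework isn't public, so this is only an illustration):

    object ClipSpec extends App {
      def clip(x: Int, lower: Int, upper: Int): Int =
        if (x < lower) lower else if (x > upper) upper else x

      // The border cases a grader-style suite hammers on
      assert(clip(5, 0, 10) == 5)    // strictly inside the range
      assert(clip(-1, 0, 10) == 0)   // below: snaps to the lower bound
      assert(clip(11, 0, 10) == 10)  // above: snaps to the upper bound
      assert(clip(0, 0, 10) == 0)    // exactly on the lower bound
      assert(clip(10, 0, 10) == 10)  // exactly on the upper bound
      assert(clip(7, 7, 7) == 7)     // degenerate range: lower == upper
      assert(clip(Int.MinValue, 0, 10) == 0)  // extreme inputs
      assert(clip(Int.MaxValue, 0, 10) == 10)
    }

Eight cases for a three-line function, and that's before randomized or property-based inputs - multiply by every assignment and the 5-10x figure looks conservative.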
The automatic grading has a huge advantage: it is nearly real-time, and improving the solution and re-submitting improves your score.<p>Having been a teaching assistant who corrected programming assignments (and also a student), I always wondered how many of the students would read my comments, go back to their solution, and actually improve it. Probably none. If I (as a student) received a comment about a solution I had submitted two weeks earlier, I often didn't instantly know what the corrector was talking about. I had to go back and look at my code, and I'm not sure I always did that when I was busy. And even if I acknowledged the comment, I don't think I would actually have gone ahead and fixed my solution.<p>I'm taking the Scala course right now, and when I submit a solution and something is flagged, my thoughts are still right in the code, I still have all the files open in vim, sbt running... so I can instantly go and fix it. And there is a real incentive to do that, because my score will improve.
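The whole loop happens inside one sbt session - something like this, if I remember the course's submit task correctly (the exact task name and arguments are from memory, so treat this as approximate):

    > test                                   // run the local test suite
    > submit your.email@example.com abc123   // upload for auto-grading

Fix the flagged code, re-run, re-submit: the feedback cycle is minutes, not weeks.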
This is nothing compared to the "peer review" in the humanities courses.<p>I know that there is no easy answer for doing a MOOC (massive open online course) in the humanities, but, according to the web, Coursera's solution is not working very well and, what is more striking to me, Coursera doesn't seem to respond.<p>But again, I have no easy solution for grading essays in a MOOC.<p>More information here:<p><a href="http://courserafantasy.blogspot.cz/2012/09/done-more-or-less.html" rel="nofollow">http://courserafantasy.blogspot.cz/2012/09/done-more-or-less...</a><p><a href="http://www.insidehighered.com/blogs/hack-higher-education/problems-peer-grading-coursera" rel="nofollow">http://www.insidehighered.com/blogs/hack-higher-education/pr...</a><p><a href="http://gregorulm.com/a-critical-view-on-courseras-peer-review-process/" rel="nofollow">http://gregorulm.com/a-critical-view-on-courseras-peer-revie...</a>
I'm making <a href="http://codehs.com" rel="nofollow">http://codehs.com</a> to teach beginners how to code. We're focusing on high schoolers and promoting good style and good practices.<p>We have a mixture of an autograder for functionality and human grading for style.<p>It's really important to have both. Our class uses a mastery model rather than grades, so you shouldn't move on until you've mastered an exercise, and mastery does not stop at functionality. Style is included.<p>Making your code readable to other people is really important, and it can and should be taught and stressed even on small exercises.<p>At Stanford, code quality is half your grade in the first two intro classes, because having someone else understand your code is just as important as making it work.
I disagree with the article in general, because I think the secret sauce of these online classes is involving students through ungraded questions during the lectures, graded tests, and homework.<p>The comprehensive grading of programs submitted for homework is good, and even where it is imperfect, in the 5 classes I have taken the assignments helped me dig into the material.<p>I also like the model of letting students take graded quizzes more than once. I find that the time spent between the first and second attempt at a quiz is very productive for improving my understanding of the material.<p>These classes are fundamentally superior to just reading through a good textbook.
What the article is saying isn't specific to MOOCs - think of continuous integration vs. code review: they are not contradictory.<p>MOOCs are not going to replace formal education, and I think the "limitations" mentioned are perfectly acceptable given the costs and incentives involved. In Coursera's Scala course, for example, there are 10K+ weekly assignment submissions, so you need a scalable assessment method. (The grader is not bad, in fact: it knows about cyclomatic complexity, warns if you use mutable collections, etc.)
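For instance, the mutable-collections warning pushes you from the first style below toward the second. This is my own sketch of the kind of code involved, not the grader's actual output:

    import scala.collection.mutable.ListBuffer

    // Imperative style: builds the result by mutation; the grader warns here
    def squaresMutable(xs: List[Int]): List[Int] = {
      val buf = ListBuffer[Int]()
      for (x <- xs) buf += x * x
      buf.toList
    }

    // Functional style: what the course wants instead
    def squares(xs: List[Int]): List[Int] =
      xs.map(x => x * x)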
I'm taking 6.00x and Udacity CS101 currently, and I'd have to disagree with the OP.<p>The code checkers give you immediate feedback with test suites that are more comprehensive than what students would (or could, in most cases) design themselves.<p>Sure, there's no professorial feedback on your code, but 90% of the time the comments you receive back on your printed-out code go unread. Not to mention that the lead time from submission to feedback, often as long as two weeks, makes the comments worthless.<p>As for style, my university's Intro to CS courses didn't check my style either. I find 6.00x and CS101 to be vastly superior in almost every respect.<p>Finally, 6.00x and CS101 actually provide you with the "correct" answers after you've passed their tests with an adequate solution. A few times I've found myself hitting my head and thinking, "Why didn't I think of that! That's more elegant than my solution," and going back and attempting to implement their solution. Try finding that in anything other than an online course.
The Scala Coursera course does a style check which will catch some style issues. I think it uses this: <a href="http://www.scalastyle.org/rules-0.1.0.html" rel="nofollow">http://www.scalastyle.org/rules-0.1.0.html</a> But it wouldn't have caught the clip problem discussed on the blog.
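To illustrate the gap (a made-up example, since the blog's exact bug may differ): this REPL-pasteable clip passes every formatting rule on that list, yet nothing validates that lower <= upper, so with swapped bounds the result depends on branch order instead of signalling an error - exactly the kind of semantic problem only a test suite or a human reviewer catches:

    def clip(x: Int, lower: Int, upper: Int): Int =
      if (x < lower) lower
      else if (x > upper) upper
      else x

    clip(5, 10, 0)  // quietly returns 10; arguably it should be an error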
As part of the build system at my work, we run various passes over the code beyond just "does this compile". I'm sure these MOOCs could find software to:<p>1. Check the style of the code<p>2. Run a comprehensive suite of unit tests<p>3. Run static analysis on the code<p>Together, these tools can catch most problems of bad formatting, fragile code (cannot handle edge cases, errors, etc.), and structural errors. Additionally, you could take some measure of performance into account: does the code solve the problem in a reasonable amount of time?<p>By using standard industry tools, one could build a good grading system that is entirely automated.
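A toy Scala version of the unit-test and performance passes (the scoring scheme and time budget are invented for illustration; the style and static-analysis passes would be separate tool runs, e.g. Scalastyle):

    object AutoGrader {
      // Time a computation, returning its result and elapsed milliseconds
      def timed[A](body: => A): (A, Long) = {
        val start = System.nanoTime()
        val result = body
        (result, (System.nanoTime() - start) / 1000000)
      }

      // Grade a submitted clip implementation: correctness plus a time budget
      def grade(clip: (Int, Int, Int) => Int): Int = {
        val cases = List(
          ((5, 0, 10), 5),    // in range
          ((-1, 0, 10), 0),   // below the range
          ((11, 0, 10), 10)   // above the range
        )
        val passed = cases.count { case ((x, lo, hi), want) =>
          clip(x, lo, hi) == want
        }
        val (_, millis) = timed((1 to 1000000).foreach(i => clip(i, 0, 10)))
        val onTime = millis < 1000             // invented performance budget
        passed * 30 + (if (onTime) 10 else 0)  // score out of 100
      }
    }

    // e.g. AutoGrader.grade((x, lo, hi) => math.max(lo, math.min(hi, x)))

None of these steps needs a human in the loop, which is the whole point at MOOC scale.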