Hi, author here! Some details on the model:

* Trained on 17GB of code from the top 10,000 most popular Debian packages. The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically a locality-sensitive hash to detect near-duplicates; a toy sketch of the idea follows below).

* I used the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) code for training. Training took about 1 month on 4x RTX8000 GPUs.

* You can download the trained model here: https://moyix.net/~moyix/csrc_final.zip and the dataset/BPE vocab here: https://moyix.net/~moyix/csrc_dataset_large.json.gz and https://moyix.net/~moyix/csrc_vocab_large.zip

Happy to answer any questions!
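To make the deduplication step concrete, here is a minimal MinHash sketch in C. This is only an illustration of locality-sensitive hashing for near-duplicate detection, not the actual preprocessing code (OpenWebText's pipeline is different); every name and constant here (`minhash_signature`, `NUM_HASHES`, the mixer) is invented for the example.

```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_HASHES 64  /* signature length: more hashes = finer estimate */
#define SHINGLE     8  /* bytes per shingle */

/* splitmix64-style finalizer; any decent 64-bit mixer works here. */
static uint64_t mix64(uint64_t x) {
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

/* MinHash signature: for each of NUM_HASHES seeded hash functions,
 * keep the minimum hash value over all byte shingles of the file. */
static void minhash_signature(const char *doc, size_t len,
                              uint64_t sig[NUM_HASHES]) {
    for (int h = 0; h < NUM_HASHES; h++)
        sig[h] = UINT64_MAX;
    for (size_t i = 0; i + SHINGLE <= len; i++) {
        uint64_t shingle;
        memcpy(&shingle, doc + i, sizeof shingle);
        for (int h = 0; h < NUM_HASHES; h++) {
            uint64_t v = mix64(shingle ^ ((uint64_t)h * 0xA24BAED4963EE407ULL));
            if (v < sig[h])
                sig[h] = v;
        }
    }
}

/* Fraction of matching signature slots estimates the Jaccard
 * similarity of the two files' shingle sets. */
static double similarity(const uint64_t a[NUM_HASHES],
                         const uint64_t b[NUM_HASHES]) {
    int same = 0;
    for (int h = 0; h < NUM_HASHES; h++)
        same += (a[h] == b[h]);
    return (double)same / NUM_HASHES;
}

int main(void) {
    const char *f1 = "int main(void) { return do_work(argc, argv); }";
    const char *f2 = "int main(void) { return do_work(argc, argv);  }";
    uint64_t s1[NUM_HASHES], s2[NUM_HASHES];
    minhash_signature(f1, strlen(f1), s1);
    minhash_signature(f2, strlen(f2), s2);
    /* A real pipeline buckets signatures with LSH banding and drops any
     * file whose similarity to an already-kept file exceeds a threshold. */
    printf("estimated similarity: %.2f\n", similarity(s1, s2));
    return 0;
}
```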
I got a function that assigned the same expression to three variables. Then it declared a void function with documentation stating "returns true on success, false otherwise". Apparently that code was written by a human, which makes me doubt either the correctness of the website or the quality of the code it was fed.
This looks like overfitting to me. Some of the GPT samples were definitely real code, or largely real code. One looked like something from Xorg, another like it was straight out of the COLLADA SDK. It's really hard to define what "truly new code" is if it's just the same code copy-pasted in a different order. Blah blah Ship of Theseus etc.
There was some code about TIFF headers, and it was apparently GPT-2 generated.

TIFF is a real thing, so some human was involved in some part of that code; it has just been garbled by GPT-2... In other words, the training set shows quite visibly in the result.
Would be nice if the back button worked so you could see what you guessed wrong. This is a good example of POST being used unnecessarily where an idempotent GET would do.
For the ones that were just part of a header file, listing a bunch of instance variables and function names, it seems impossible. But for the actual code, it is possible, though still quite difficult; I often spent too long hunting for a logical inconsistency that would give it away.
This was so much harder than I thought it was going to be. I would get a few right and then be absolutely sure of the next one and be wrong. After a while I felt like I was noticing aesthetic differences between the GPT and real code more than distinguishing between the two based on their content. Very interesting...
I wonder how likely code invariants found in the training set are to be preserved in GPT-2/3's output. In other words, if I train GPT-2 on C source produced by Csmith (a program generator which I believe produces C programs without undefined behaviour), would programs produced by GPT-2/3 also be free of undefined behaviour?

I understand that GPT-2/3 is just a very smart parrot that has no semantic knowledge of what it is outputting. As an illustration, take a very dumb Markov chain that was "trained" on the following input:

a.c

```
int array[6];
array[5] = 1;
```

b.c

```
int array[4];
array[3] = 2;
```

A Markov chain could theoretically produce the following code:

out.c

```
int array[4];
array[5] = 1;
```

which is undefined behaviour (an out-of-bounds write), even though it was produced from two programs with no undefined behaviour at all. A better question would be: how can we guarantee that certain program invariants (like the absence of undefined behaviour) are preserved in the produced code? Or, if there are no guarantees, can we calculate a probability? (Sorry, not an expert on machine learning, just excited about a potentially new way to fuzz programs. Technically, one could instrument C programs with sanitizers and compute backwards static slices from the sanitizer instrumentation to the beginning of the program, yielding samples of programs without undefined behaviour... so there is already the potential for a training set that goes beyond what Csmith can provide.)
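To see the invariant violation in out.c concretely, one can lean on the sanitizer approach mentioned above: compile the hypothetical Markov-chain output with AddressSanitizer and the out-of-bounds write is caught at runtime. A minimal sketch:

```
/* out.c: the hypothetical Markov-chain output from above. */
#include <stdio.h>

int main(void) {
    int array[4] = {0};
    array[5] = 1;   /* out-of-bounds write: undefined behaviour */
    printf("%d\n", array[3]);
    return 0;
}
```

Building with `gcc -g -fsanitize=address out.c && ./a.out` makes ASan abort with a stack-buffer-overflow report instead of letting the undefined behaviour pass silently, which is exactly the sort of oracle that backwards slicing could start from.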
This was a fun exercise, definitely think this could be difficult to suss out for greener devs or even more experienced ones. It’d be hilarious to have this model power a live screensaver in lieu of actually being busy at times.
I was confused by most examples I got because they started in the middle of a block comment. That's clearly wrong, but was it an artefact of the presentation rather than the generation?
It's easy after a while... all the terribly written code is human-made and the clean, tidy code is GPT-2. I, for one, welcome our new programming overlords.
This is actually quite impressive. Try reading the comments in the code. The comments often make perfect sense in the local context even if it's GPT-2 gibberish.

The real examples have worse comments at times.

The only flaw is that it shows fake code most of the time, so you can game it that way.
A lot of the real code seems superficially nonsensical as well!

Functions with lots of arguments whose entire body is "return true;".

I guess it shows what AI often tells us about ourselves: that what we do makes much less sense than we think it does. It is thus easy to fake.

How is it possible to churn out so much music, so many books, so much software? Because most creative works are either not very original or are quite procedural or random.

And this kind of work could indeed be automated (or examined to see whether it needs to be done in the first place).
I found this impressively hard at first glance. It just goes to show how difficult it is to get context in an unfamiliar codebase. I think with any amount of knowledge of anything allegedly involved (or, you know, a compiler), these examples would fall apart, but it's still an achievement.

I'm also pretty sure there are formatting, commenting, and in-string-text "tells" that reliably indicate whether something is GPT-2. Maybe I should try training an AI to figure that out...
I was always able to correctly identify GPT-2, but on a few occasions I misidentified human-written code as being written by GPT-2, usually when the code was poorly written or the comments were unclear. GPT-2's code looks correct at a glance, but when you try to understand what it's doing, that's when you realize it could not have been written by a human. It's similar to the articles produced by GPT-3; they have the right form but no substance.
It's fascinating that the main reason this is hard is that some of the human code is bad enough that it's hard to believe it's not GPT-2 output. (The first time this happened for me, I had to look it up to convince myself it's really human code.)<p>It reminds me of how GPT-3 is good at producing a certain sort of bad writing.<p>My guess as to why this happens: we humans have abilities of logical reasoning, clarity, and purposefulness that GPT doesn't have. When we use those abilities, we produce writing and code that GPT can't match. If we don't, though, our output isn't much better than GPT's.
I got 4/4 GPT-2 guesses right. It is impressive, but the "tell" I've found so far is poor structure in the logic of how something is arranged. For example: a bunch of `if` statements in sequence, without any `else` clauses, some with conditions directly opposing prior ones. Another example was repeating the same operation across several individual lines of code that most human programmers would write in a simpler manner.

It's harder with some of the smaller excerpts, though, and I'm sure there are examples of terrible human programmers who write worse code than GPT-2.
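To make those tells concrete, here's a hand-written C fragment (invented for this comment, not an actual sample from the site) exhibiting both patterns:

```
static void start_timer(void) { /* stub for illustration */ }

static void gpt2_style(int state, char *buf) {
    /* Tell #1: sequential ifs with directly opposing conditions,
     * where a human would write a single if/else. */
    if (state == 1)
        start_timer();
    if (state != 1)
        start_timer();

    /* Tell #2: the same operation repeated line by line where a
     * human would write memset(buf, 0, 3) or a loop. */
    buf[0] = 0;
    buf[1] = 0;
    buf[2] = 0;
}

int main(void) {
    char buf[3];
    gpt2_style(1, buf);
    return 0;
}
```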
The two factors that seemed like dead giveaways were comments that didn't relate to the code, and sequences of repetition with minor or no variations.
This is difficult... because these models are just regurgitating after training on real code. Fun little site but I hope nobody reads too much into this.
The giveaway seems to be the comments. Some comments a computer could obviously generate, and some are obviously written by humans. In my playing around with it, the code itself seemed irrelevant to making the determination.
I may have delusions of grandeur because I mainly work on compilers and libraries rather than business code, but there is some diabolically awful code in this training set.
The goal when writing code is to be pretty machine-like and to keep things extremely simple. People also write dead or off-topic comments. That's why this is so hard.