I have been testing out ChatGPT and the Azure/GitHub Copilot solutions, and I am impressed by the work done there in such a short period of time.
But I can't help wondering about this:
We created programming languages so that human beings could give instructions to a computer. Now we are teaching computers our human language so they can propose code.
However, if we make computers understand our normal language (we did that), wouldn't it be much more efficient if we let this computer create computer language directly? It would be much more efficient for the AI, it would enable much greater options since we are not restricted by a programming language, and it would have much less overhead.
Now, I understand this could cause problems if we ever need to change or review the code manually, but realistically: would we?
The crux of the issue is that, at this point in time, a human still needs to validate what an AI generates; that means it needs to generate the code in a human-readable format (i.e. a programming language).
I think ChatGPT is not all the way there yet, but it is an awesome proof of concept of what can be done with NLP alone.<p>You can use it for all kinds of roadmaps and features that have been done before, but the same has always been true for "frameworks".<p>I think the next iteration will be an AI model connected to an actual relational database (yes, a classic one) to store factual information and to have it "understand" things by relations.
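A toy sketch of what "storing facts by relations" could look like (the table layout and the facts here are made up for illustration, not any real system):

```python
import sqlite3

# In-memory toy database: facts stored as subject-relation-object rows,
# so "understanding things by relations" becomes a plain SQL lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (subject TEXT, relation TEXT, object TEXT)")
conn.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("Paris", "capital_of", "France"),
        ("France", "member_of", "EU"),
        ("Berlin", "capital_of", "Germany"),
    ],
)

# "What do we know about Paris?" is a factual query, not a probabilistic guess.
rows = conn.execute(
    "SELECT relation, object FROM facts WHERE subject = ?", ("Paris",)
).fetchall()
print(rows)  # [('capital_of', 'France')]
```

The point is not the schema itself but that retrieval from such a store is exact, unlike sampling from a language model.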
Technically it has always been possible to use world-scale databases with unfathomable amounts of information spread out across continents, and we as human agents are also simply "linking" relational info to each other. AI will be much better at automatically creating atomic database tables and queries, because it understands them at another level than we do. This could also be done with "trust scoring": where the information comes from, how often you heard it, from which source with which trust score, and so on.<p>This is why I am excited about the OpenAI-Microsoft partnership, because it means we will get to see OUR data incorporated into high-end models. That, for me, is when the beta of ChatGPT really begins.<p>And what this has to do with coding:<p>You can replicate amazing software and create novel software too, but right now it's all a good-luck game where you might just figure out that half the functions you were supposed to import do not exist. My belief is that this is because it does not actually understand code, but translates it to a human-readable format, writes a story for you based on probability, and then translates it back.<p>If it knew how a Kubernetes Service and Deployment actually relate to each other, and which properties of one influence the other, that's when this is going to reach amazing levels =)
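To make the Kubernetes point concrete: a Service routes traffic to a Deployment's pods only when the Service's label selector matches the pod template's labels. A minimal sketch of that structural relation, with the manifests stripped down to hypothetical Python dicts:

```python
# Hypothetical, heavily simplified manifests as Python dicts. The rule being
# illustrated: every key/value pair in the Service's selector must appear
# among the Deployment's pod-template labels, or no traffic is routed.
deployment = {
    "kind": "Deployment",
    "spec": {
        "template": {"metadata": {"labels": {"app": "web", "tier": "frontend"}}}
    },
}
service = {
    "kind": "Service",
    "spec": {"selector": {"app": "web"}},
}

def service_targets_deployment(svc, dep):
    """True if the Service's selector is a subset of the pod-template labels."""
    labels = dep["spec"]["template"]["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in svc["spec"]["selector"].items())

print(service_targets_deployment(service, deployment))  # True: 'app: web' matches
```

A model that represented this as a checkable relation, rather than as likely-looking YAML, could verify a selector mismatch instead of merely generating text that resembles a valid manifest.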
I think there’s a fundamental limit to the degree of correctness an AI can achieve, without it actually being embedded in the business domain. So long as that domain exists in meatspace, there will always be business requirements beyond the reach of even the most sophisticated AI coder.<p>That means humans need to be in the loop, and programming languages are the best abstractions we’ve yet created for unambiguously translating business requirements into executable logic. Seems to me like that remains every bit as necessary.
> if we let this computer create computer language directly? It will be much more efficient for the AI, it will enable much greater options since we are not restricted by the programming language, and it will have much less overhead.<p>I'd argue that it is likely significantly harder for current models, due to the sheer number of instructions involved in machine code; that is a problem for LLMs because of their limited context window measured in tokens.<p>What we are seeing with ChatGPT is that the model is actually inventing its own abstractions, which, imho, suggests that going up in abstraction instead of down will enable higher productivity for the models.<p>On one hand, LLMs are very versatile in what they can produce, but on the other hand that versatility results in delusions. This is, imho, akin to AlphaGo when it made that single error in the match with Lee Sedol.
I think this is the right track; it's better for it to specialize in programming languages we understand, because AI hallucinates often.<p>We will have to correct it for years, while leveraging its speed for shell scripts, text files, and customized boilerplate, until it stops hallucinating.
"wouldn't it be much more efficient if we let this computer create computer language directly?"<p>Pretty sure we tried this, a few times now, got scared at the results, and unplugged the power source.<p>Human in the loop. For humanity's sake. Please.
> However if we make computers understand our normal language (we did that)<p>Nope, we didn't, and therein lies the problem.<p>What we did do is make it appear as if it understands human language, but there are numerous examples across the web to show how it fundamentally does not have any <i>understanding</i> of what it's saying.
> (...) wouldn't it be much more efficient if we let this computer create computer language directly?<p>I am not an ML/AI engineer, but this is most likely much easier said than done, and probably more into "true AI" territory. (?)