In one of their examples, they note “They saw ratings hover around 60% with their original, in-house tech — this improved by 7-8% with GPT-2 — and is now in the 80-90% range with the API.”<p>Bloomberg reports the API is based on GPT-3 and “other language models”.<p>If that’s true, this is a big deal, and it lives up to OpenAI’s name. The largest NLP models require vast corporate resources to train, let alone put into production. Offering the largest model ever trained (with near-Turing results on some tasks) democratizes technology that would otherwise have been restricted to well-funded organizations.<p>Although the devil will be in the details of pricing and performance, this is a step worthy of respect. And it bodes well for the future.
Concrete numbers from the various pullouts:<p>> They saw ratings hover around 60% with their original, in-house tech — this improved by 7-8% with GPT-2 — and is now in the 80-90% range with the API.<p>> The F1 score of its crisis classifier went up from .76 to .86, and the accuracy went up to 96%.<p>> With OpenAI, Algolia was able to answer complex natural language questions accurately 4x as often as it was using BERT.<p>I think the most informative are the first two, but the most _important_ is the final comparison with BERT (a Google model). I am, uh, a little worried about how fast things will progress if language models go from a fun lil research problem to a killer app for your cloud platform. $10m per training run isn't much in the face of a $100bn gigatech R&D budget.
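(For anyone rusty on the metric: F1 is the harmonic mean of precision and recall, so .76 → .86 is a big jump. A quick back-of-envelope in Python — the precision/recall values below are made up, since the post only reports F1:)

    # F1 is the harmonic mean of precision and recall.
    def f1(precision, recall):
        return 2 * precision * recall / (precision + recall)

    # Illustrative numbers only -- the post reports F1, not precision/recall.
    print(round(f1(0.74, 0.78), 2))  # 0.76, roughly the old classifier
    print(round(f1(0.84, 0.88), 2))  # 0.86, roughly the new one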
Since the demos on this page use zero-shot learning and the model used has a 2020-05-03 timestamp, that implies this API is using some form of GPT-3: <a href="https://news.ycombinator.com/item?id=23345379" rel="nofollow">https://news.ycombinator.com/item?id=23345379</a> (EDIT: the accompanying blog post confirms it: <a href="https://openai.com/blog/openai-api/" rel="nofollow">https://openai.com/blog/openai-api/</a> )<p>Recently, OpenAI set the GPT-3 GitHub repo to read-only: <a href="https://github.com/openai/gpt-3" rel="nofollow">https://github.com/openai/gpt-3</a><p>Taken together, this seems to imply that GPT-3 was intended more for a SaaS offering like this, and that it's less likely to be open-sourced the way GPT-2 was.
Looks like OpenAI is going head to head with huggingface.<p>This makes a lot of sense, and they seem to be telegraphing an intent to monetize what they have been building. It also seems like this is why they don't release their models in a timely manner.
Whoa -- speech to bash commands? That's a pretty novel idea to me, with my limited awareness of NLP. I could see this same idea in a lot of technical applications -- provisioning cloud infrastructure, creating a database query... Very cool!
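(If you're wondering what calling it might look like: presumably something like the sketch below. The endpoint path, engine name, and response shape are my guesses from the demo videos, not documented fact.)

    import requests

    API_KEY = "sk-..."  # hypothetical key from the beta

    prompt = (
        "Translate English to a bash command.\n"
        "English: list all files larger than 100MB\n"
        "Bash:"
    )

    # Endpoint and parameters are assumptions, not from official docs.
    resp = requests.post(
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 64, "temperature": 0},
    )
    print(resp.json()["choices"][0]["text"].strip())
    # e.g. "find . -type f -size +100M"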
OpenAI started off wide-eyed and idealistic, but it made the mistake of taking on investors for a non-profit mission. A non-profit requires sponsors, not investors. Investors have a fiduciary responsibility to maximize profits, not to achieve a social mission of open AI for all.
I guess Sama plans on manufacturing growth metrics by forcing YC companies to pretend that they're using this.<p>Generic machine learning APIs are a shitty business to get into unless you plan on hiring a huge sales team and selling to dinosaurs, or doing a ton of custom consulting work, which doesn't scale the way VCs like it to. Anybody with enough know-how to use their API properly can just grab an open-source model and tune it on their own data.<p>If they plan on commercializing things, they should focus on building real products.
OpenAI started as a non-profit, went for-profit, and is still owned by the big players... Something isn't right.<p>Is OpenAI just a submarine so the tech giants can do unethical research without taking blame? It's textbook misdirection: non-profit and "Open" in the name, hero-esque mission statement. How do you make the mental leap from "we're non-profit and we won't release things too dangerous" to "JK, we're for-profit, and now that GPT is good enough to use it's for sale!!"? You don't. This was the plan the whole time.<p>GPT and facial recognition used for shady shit? Blame OpenAI, not the consortium of tech giants that directly own it. It may just be a conspiracy theory, but something smells very rotten to me. Like OpenAI is a simple front so big names can dodge culpability for their research.
Why are there no live examples on the page? All I see are video presentations and some cached API responses.<p>Is it a confidence problem? Are the OpenAI folks not confident in a single use case? Or did I miss a live demo somewhere?
Natural language search is approximately a $100B business. This might be the first AI application to change a search landscape that has barely moved since the 1990s, and to finally put an end to the question “where is the money in AI?”.
In NLP there is a very clear and powerful new paradigm: train a HUGE language model on vast amounts of raw text. Then, to solve the problem of interest, either fine-tune the model by training on your specific dataset (usually quite small), or 0/1-shot the learning somehow.<p>The crucial question is: is this paradigm viable for OTHER types of data?<p>My hypothesis is YES. If you train a HUGE image model using vast quantities of raw images, you will then be able to REUSE that model for specific computer vision problems, either by fine-tuning or 0/1-shotting.<p>I'm especially optimistic that this paradigm will work for image streams from autonomous vehicles. Classic supervised learning has proved difficult, if not impossible, to get working for AV vision, so the new paradigm could be a game-changer.
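To make the 0/1-shot half of that concrete: the "learning" is just prompt construction, with no gradient updates at all. A toy sketch (the task, labels, and format are arbitrary examples of mine):

    # One-shot "learning": show the model a single worked example in the
    # prompt, then ask it to continue the pattern for a new input.
    def one_shot_prompt(example_text, example_label, query):
        return (
            f"Review: {example_text}\n"
            f"Sentiment: {example_label}\n"
            f"Review: {query}\n"
            f"Sentiment:"
        )

    prompt = one_shot_prompt(
        "The food was cold and the service was slow.", "negative",
        "Absolutely loved it, would go again!",
    )
    # Feed `prompt` to the language model; a good one completes "positive".
    # Fine-tuning, by contrast, actually updates the weights on your dataset.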
An API that will try to answer any natural language question is a mind-blowing idea. This is a universal thinking interface more than an application programming one.
I just sent in a request to join the waiting list for the company I work at, Kognity. The potential for this in the EdTech field is mind-blowing!<p>There are a few good examples of educational help on the list, but it's really only scratching the surface.<p>I'm really excited, and I hope Kognity and EdTech in general can use this for even more valuable tasks (for both students and teachers) soon.
OpenAI seems like a completely disingenuous organization. They have some of the best talent in machine learning, but the leadership seems completely clueless.<p>1) (on cluelessness) If Sama/GDB were as smart as they claim to be, would they not have realized it is impossible to run a non-profit research lab that is effectively trying "to compete" with DeepMind?<p>2) (on disingenuousness) The original OpenAI charter made OpenAI an organization that was trying to save the world from nefarious actors and uses of AI. Who were such actors? To me it seemed like entities with vastly superior compute resources who were using the latest AI technologies for presumably profit-oriented goals. There are few organizations in the world like that, namely FAANG and their international counterparts. Originally OpenAI sounded incredibly appealing to me, and to a lot of us here. But if their leadership had more forethought, they would perhaps not have made this promise. Given the press and the money they accrued, it has now become impossible to go back on this charter. So the only way to get themselves out of the hole they dug was to become a for-profit research lab. And by commercializing a perhaps superior version of the tools Microsoft, Google, and the other large AI organizations are commercializing, is OpenAI any different from them?<p>How do we know OpenAI will not be the bad actor that abuses AI, given their self-interest?<p>All we have is their charter to go by. But given how they are constantly "re-inventing" their organizational structure, what grounds do we have to trust them?<p>Do we perhaps need a new Open OpenAI? One that we can actually trust? One that is actually transparent about its research process? One that actually releases its code and papers and has no interest in commercializing them? Oh, that's right, we already have that -- research labs at AI-focused schools like MIT, Stanford, BAIR, and CMU.<p>I am quite wary of this organization, and I would encourage other HN readers to think more carefully about what they are doing here.
What happened to working on AI for the good of humanity, including AGI, and making sure it didn’t fall into the hands of bad actors? Wasn’t that the original aspiration? Now this reads like next-generation Intercom/Olark tooling.
"OpenAI technology, just an HTTPS call away"<p>'an' is only mean to proceed a vowel. Should say<p>"OpenAI technology, just a HTTPS call away"
On a side note, has anyone noticed the lack of diversity in the group photo on their careers page: <a href="https://openai.com/content/images/2020/04/openai-offsite-july-2019.jpg" rel="nofollow">https://openai.com/content/images/2020/04/openai-offsite-jul...</a><p>I remember coming across it not too long ago and feeling unwelcome/disappointed.
This is what I submitted for the beta list:<p>I want to create software that can generate new code given business-case hints, by studying existing open-source code and its documentation.<p>I know this is vague, but that sounds like what we eventually want for ourselves, right?