There is huge potential for language models to get close to messy text problems (many of which live in Excel and Sheets). I am the founder of promptloop.com - the author of this tweet has been an early user.<p>The challenge in making something like this, or Copilot / Ghostwrite, work well is meeting users where they are. Spreadsheet users don't want to deal with API keys or know what temperature is - but anyone (like the author of this tweet) can set up direct API use with generic models in 10 minutes. This document has all the code to do so ;). [1]<p>For non-engineers - or folks who need a reliable and familiar syntax to use at scale and across their org - promptloop [2] is the best way to do that. All comments in here are great though. We have been live with users since the summer - no waitlist. And as a note - despite the name, "prompt engineering" has almost nothing to do with making this work at scale.<p>[1] <a href="https://docs.google.com/spreadsheets/d/1lSpiz2dIswCXGIQfE69dtH_VgdOFxjEBOK1WpT5uh8k/template/preview" rel="nofollow">https://docs.google.com/spreadsheets/d/1lSpiz2dIswCXGIQfE69d...</a>
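The "direct API use in 10 minutes" claim above boils down to one HTTP call per cell. A minimal sketch in Python against OpenAI's legacy completions endpoint (the model name, prompt layout, and `=GPT3()`-style wrapper are assumptions for illustration, not the linked template's actual code):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"  # legacy completions endpoint

def build_request(prompt: str, api_key: str, model: str = "text-davinci-003",
                  temperature: float = 0.0) -> urllib.request.Request:
    """Build a completions request; temperature 0 keeps spreadsheet output repeatable."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": 64,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def gpt3_fill(cell_value: str, task: str, api_key: str) -> str:
    """Equivalent of a =GPT3(...) custom function: one API round-trip per cell."""
    req = build_request(f"{task}\nInput: {cell_value}\nOutput:", api_key)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["text"].strip()
```

In a real sheet the same call would live in an Apps Script custom function rather than Python, but the request shape is identical.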
[2] <a href="https://www.promptloop.com/" rel="nofollow">https://www.promptloop.com/</a>
The most sensible use for AI that I can see at this time is supporting humans in their work, but <i>only</i> where the system is set up so that the human has to do the work first, with the AI system looking for possible errors. For example, the human drives the car and the AI brakes when it senses dangerous conditions ahead, or the human screens test results for evidence of cancer and the AI flags where it disagrees so that the human might take another look. The opposite scenario, with AI doing the work and humans checking for errors as is the case here, will lead to humans becoming over-reliant on less-than-perfect systems and producing outcomes with high rates of error. As AI improves and gains trust in a field, it can then replace the human. But this trust has to come from evidence of AI superiority over the long term, not from companies over-selling the reliability of their AI.
The ability of language models to do zero-shot tasks like this is cool and all, but there is no way you should actually be doing something like this on data you care about. Like think about how much compute is going into trying to autofill a handful of zip codes, and you're still getting a bunch of them wrong.
Now wait for =deep_dream() or maybe =stable_diffusion() as a graph-generating function! (Graphs plotted with this function will of course zoom in infinitely but the further you go the more eyes and shiba dogs you'll notice in the corners ...)
Also check out <a href="https://usedouble.com" rel="nofollow">https://usedouble.com</a> (YC W23) if you're interested in using something like this today.<p>Note: I'm the founder :) Happy to answer any questions.<p>Reply below with some sample data/problem and I'll reply with a demo to see if we can solve it out of the box!
Do I understand that correctly? When I have to create a spreadsheet like this, there are 2 options. Option 1: I write a zip-code-to-state table and use this table to generate my column. If I carefully check my table, my spreadsheet will be okay. Option 2: I ask GPT-3 to do my work, but then I have to check the whole spreadsheet for errors.
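The difference between the two options can be made concrete: a checked lookup table either answers or fails visibly, so there is nothing to proofread row by row. A minimal sketch of Option 1 (the prefix mapping here is a tiny illustrative sample, not a complete table):

```python
from typing import Optional

# Option 1 from the comment above: a small, hand-checked lookup table.
# Three-digit ZIP prefixes; illustrative sample only, not a complete mapping.
ZIP_PREFIX_TO_STATE = {
    "100": "NY",  # Manhattan
    "606": "IL",  # Chicago
    "941": "CA",  # San Francisco
}

def state_for_zip(zip_code: str) -> Optional[str]:
    """Deterministic: either we know the answer or we visibly don't (None)."""
    return ZIP_PREFIX_TO_STATE.get(zip_code[:3])
```

An LLM, by contrast, returns a confident-looking string for every row, so the checking cost moves from the table to the whole output column.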
This seems to be doing much worse than existing solutions: Google Maps probably wouldn't have gotten quite as many wrong if you just pasted those addresses into the search bar. However it could be interesting as a last shot if parsing the input failed using any other way.<p>"I tried parsing your messy input. Here's what I came up with. Please make sure it's correct then proceed with the checkout."
Of all places, a spreadsheet is probably the one place you don’t want AI-generated content. Half the time it’s financial info, so "sorta correct" simply isn’t good enough.
I said it before: we need Copilot flash fill. Infer what the user wants the output to be from patterns and labels, so they can enter a few examples and then “extend” and automatically do the equivalent of a complex formula. e.g.<p><pre><code> Formal | Informal
Lane, Thomas | Tommy Lane
Brooks, Sarah | Sarah Brooks
Yun, Christopher |
Doe, Kaitlyn |
Styles, Chris |
…
</code></pre>
Automating something like this is extremely hard with an algorithm and extremely easy with ML. Even better, many people who use spreadsheets aren’t very familiar with coding and software, so they do things manually even in cases where the formula is simple.
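A Copilot-style flash fill could be bootstrapped with little more than a few-shot prompt assembled from the rows the user has already filled in. A rough sketch, assuming the table above (the prompt layout and column labels are just one plausible format):

```python
def flash_fill_prompt(examples: list, query: str) -> str:
    """Turn the user's filled-in rows into a few-shot prompt for a language model.

    examples: list of (formal, informal) pairs from completed rows.
    query:    the formal name in the next, still-empty row.
    """
    blocks = [f"Formal: {formal}\nInformal: {informal}" for formal, informal in examples]
    blocks.append(f"Formal: {query}\nInformal:")
    return "\n\n".join(blocks)

# The two completed rows from the table become the examples; the model is
# asked to continue the pattern for the first empty row.
prompt = flash_fill_prompt(
    [("Lane, Thomas", "Tommy Lane"), ("Brooks, Sarah", "Sarah Brooks")],
    "Yun, Christopher",
)
```

Note that the examples implicitly teach both the reordering ("Last, First" to "First Last") and the nickname substitution (Thomas to Tommy), which is exactly the kind of fuzzy rule that is painful to express as a formula.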
also previously from 2020 <a href="https://twitter.com/pavtalk/status/1285410751092416513?s=20&t=ppZhNO_OuQmXkjHQ7dl4wg" rel="nofollow">https://twitter.com/pavtalk/status/1285410751092416513?s=20&...</a>
GPT-3 charges for every token read and written. It may be more useful not to run GPT-3 on every row, but to give it the task once and have it generate a function that fulfills the task.
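The generate-once, apply-everywhere idea might look like this sketch. Here `generate_function` is a placeholder for a single API call, and the returned source is a stand-in for what a model might plausibly produce for "extract the state from an address"; in real use you would review the generated code before executing it:

```python
def generate_function(task_description: str) -> str:
    """Placeholder for ONE GPT-3 call that returns Python source for the task.

    In real use: send task_description to the completions API and return the
    code it writes. The body below is a hand-written stand-in for that output.
    """
    return (
        "def transform(row):\n"
        "    return row.rsplit(',', 1)[-1].strip()\n"
    )

def compile_transform(source: str):
    """Exec the generated source and pull out the transform function."""
    namespace = {}
    exec(source, namespace)  # trust boundary: review generated code first!
    return namespace["transform"]

transform = compile_transform(generate_function("extract the state from an address"))

rows = ["123 Main St, Springfield, IL", "1 Market St, San Francisco, CA"]
states = [transform(r) for r in rows]  # no further API tokens spent per row
```

The per-row cost drops to zero, and unlike per-cell completions the behavior is inspectable and deterministic once the function exists.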
The tasks on the first sheet are easily accomplished by Flash Fill in MS Excel, and I suspect it's less prone to error. Not sure why Flash Fill is not more popular.
This is terrific stuff, honestly. I could see an Airtable integration being really quite useful. There were lots of times when I would run some quick scraping, do some cleaning up via an Upworker, and then join the result against something else.<p>Here volume matters, and all misses are just lost data, which I'm fine with. The general-purpose nature of the tool makes it tremendous. There was a time when I would have easily paid $0.05 / query for this. The only problem with the spreadsheet setting is that I don't want it to repeatedly execute and charge me, so I'll be forced to use `=GPT3()` and then copy-paste "as values" back into the same place, which is annoying.
if you need that address parser, this is a bit more robust and easier to use: <a href="https://workspace.google.com/u/0/marketplace/app/parserator_parse_and_split_addresses/945974620840" rel="nofollow">https://workspace.google.com/u/0/marketplace/app/parserator_...</a>
I would love to see a tool which uses GPT-3 to generate SQL from English.<p>Like: give me a list of all customers from London who purchased a laptop with more than 16GB of RAM in January and used a coupon between 10% and 25%. Sort it by price paid.
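The core of such a tool is mostly prompt construction: hand the model the schema along with the English request. A hedged sketch (the table and column names are invented for the example; a real tool would introspect the actual database):

```python
def text_to_sql_prompt(schema: str, question: str) -> str:
    """Common text-to-SQL prompt pattern: schema first, then the request."""
    return (
        "Given the following SQLite schema:\n"
        f"{schema}\n"
        "Translate this request into a single SQL query:\n"
        f"{question}\n"
        "SQL:"
    )

# Hypothetical schema matching the comment's example query.
schema = (
    "CREATE TABLE customers (id INTEGER, name TEXT, city TEXT);\n"
    "CREATE TABLE orders (id INTEGER, customer_id INTEGER, product TEXT,\n"
    "                     ram_gb INTEGER, coupon_pct REAL, price REAL,\n"
    "                     ordered_at TEXT);"
)

prompt = text_to_sql_prompt(
    schema,
    "All customers from London who bought a laptop with more than 16GB of RAM "
    "in January, using a coupon between 10% and 25%, sorted by price paid.",
)
```

Grounding the model in the real schema is what makes the generated SQL plausible; without it the model invents table and column names.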
The amount of 90% sensible, 10% ridiculously wrong computer generated crap we’re about to send into real humans’ brains makes my head spin. There’s truly an awful AI Winter ahead and it consists of spending a substantial amount of your best brain cycles on figuring out whether a real person wrote that thing to you (and it’s worth figuring out what they meant in case of some weird wording) or it was a computer generated fucking thank you note.