I wonder whose code this was trained on? I guess it's interesting that this is quite a popular search result on Github:<p><a href="https://github.com/search?q=plane+spotter+language%3AJavaScript&type=repositories&l=JavaScript">https://github.com/search?q=plane+spotter+language%3AJavaScr...</a>
This works better for some languages than others. I tried something similar for Swift, since I wanted a simple tool for macOS. The process was not great: many methods were hallucinated, and the answers matched old/deprecated functions from previous Swift versions. The quality seems to track what's available on Stack Overflow.
I'm constantly ill at ease with how LLMs have been trained on, well, just about everyone's data, and spit out really creative responses that were wholly inspired by someone else's most likely copyrighted works. With no attribution.<p>This response in particular.<p>I immediately thought of this project (that hit the front page of HN):<p><a href="https://skybot.cam/about" rel="nofollow noreferrer">https://skybot.cam/about</a>
Coding LLMs really shine for greenfield projects in popular programming languages.<p>They don’t do half as well in large codebases that use ad-hoc frameworks. For example, they have no idea how to retrieve the currently-logged-in user object for a new endpoint you want them to build.<p>The solution there is to fine-tune on your codebase, but that’s likely a few years away for the average LLM user.
One thing that bothers me with these posts is that the majority of the work goes into the API.<p>A more accurate title would be "Displaying nearby ADS-B data for planespotting in 120 secs...".
Next step... get it to map the ICAO callsign broadcast in ADS-B to the IATA flight number. At the airport you'll see the IATA number, but in the ADS-B data you'll have ICAO. In his example it says:<p><pre><code> Flight TAP1369...
Flight TAP23NP...
Flight TAP1691...
Flight RZO134...
</code></pre>
TAP1369 (ICAO) is TP1369 (IATA); similar for TAP1691 (ICAO) and TP1691 (IATA). But TAP23NP (ICAO) is TP1823 (IATA) and RZO134 (ICAO) is S4 134 (IATA). What's broadcast in ADS-B is very often very different from what's shown at the airport and not what someone would recognize.<p>Doing this mapping is a pain. The OP is using OpenSky Network who say:<p><i>callsign<p>This column contains the callsign that was broadcast by the aircraft. Most airlines indicate the airline and the flight number in the callsign, but there is no unified system. In our example, the callsign indicates that this state vector belongs to UPS flight 858. By looking up the flightnumber on services like flightaware.com, you’ll find out that this flight goes from Lousville to Phoenix every day.</i><p>i.e. go look this data up elsewhere.<p>I asked ChatGPT:<p><i>Write me some JavaScript that can map an ICAO callsign broadcast by ADS-B to an IATA flight number for the same flight.</i><p>Mapping an ICAO callsign broadcast by ADS-B to an IATA flight number is a complex task that typically requires access to a comprehensive aviation database. Such databases are often provided by aviation data providers and may not be freely available. However, I can provide you with a simplified example of how you might approach this task if you had access to the necessary data.
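The simple part of that mapping (numeric-suffix callsigns like TAP1369) can be sketched in a few lines of JavaScript. The airline-code table below is illustrative, not a real database, and as the comment above notes, alphanumeric suffixes like TAP23NP genuinely require an external flight-data lookup:

```javascript
// Illustrative ICAO -> IATA airline designator table (hand-maintained;
// a real one has thousands of entries).
const ICAO_TO_IATA = {
  TAP: "TP", // TAP Air Portugal
  RZO: "S4", // Azores Airlines
};

function callsignToIata(callsign) {
  // Only handles the easy case: 3-letter airline prefix + numeric flight number.
  const match = callsign.trim().match(/^([A-Z]{3})(\d+)$/);
  if (!match) return null; // alphanumeric suffix (e.g. TAP23NP): needs a lookup service
  const iata = ICAO_TO_IATA[match[1]];
  return iata ? iata + match[2] : null;
}
```

So `callsignToIata("TAP1369")` gives "TP1369", while "TAP23NP" falls through to `null` because the callsign carries no recoverable flight number.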
This is my favorite part about LLMs and image generators: they are the ultimate rapid prototyping tool. I really benefited from making it a habit to use them a lot for coding and anything else, as I started learning what works and what doesn’t, and thus bootstrapped a whole set of tools for myself and my team.
This feels a bit like survivorship bias. Or perhaps more accurately akin to the file drawer problem in academia. This may have worked really quickly this time for this specific use case, but what about the vast number of times a prompt like this won't have worked for other people? The other metaphor that comes to mind is monkeys and typewriters, although clearly here the odds are a bit better.
The insight about thinking deeply is compelling, and I believe it's partly true. We could all benefit from sitting silently and thinking more often.<p>But I also suspect that it overvalues individual insight and undervalues the normal process by which most complex ideas with application to the "real world" actually get created - through back-and-forth interaction between people.
> ChatGPT is great at executing, but it’s not so great at coming up with new ideas.<p>I’m not sure if I agree with the second half of that statement. Several times I’ve asked GPT-4 to suggest ideas, and it came up with some very good ones that I would not have thought of myself. Here’s an exchange I had with it just now:<p><a href="https://chat.openai.com/share/d23ce0d2-ed60-4259-9176-73b590cae164" rel="nofollow noreferrer">https://chat.openai.com/share/d23ce0d2-ed60-4259-9176-73b590...</a><p>I’m also not sure if I agree with the first half, particularly the “great” in “great at executing.” While I have also been amazed a few times by code it wrote for me, I have run into more cases where, despite repeated back-and-forths with me, it was never able to produce code that ran.<p>Maybe better, in my experience, would be “ChatGPT can often come up with good ideas, but it is only occasionally able to execute them.”
I have had a similar experience using ChatGPT to build <a href="https://meoweler.com" rel="nofollow noreferrer">https://meoweler.com</a> – a travel website covering 5000+ cities with a surprisingly good content quality created over a long weekend. Something that was unfathomable for me 2 years ago.
I have used ChatGPT to help me brainstorm and do quality checks on my Masters work. Its hallucination is even an asset when you have to come up with a hypothetical project. For example, I had it generate parts of my project given characteristics or attributes. It came very close, and with further prompting in the same window the responses got better and better.
As a pentester this new wave of GPT-led development has been both a nightmare and a dream.<p>A nightmare because we have “devs” using it to develop applications laden with vulnerabilities. As an example we found a web app with account enumeration/forced browsing vulnerabilities that would have led to a sizable chunk of our user base being exposed. It would later come out that he had copied and pasted everything from ChatGPT.<p>When I asked him if he trusted everything he found on StackOverflow he said no as if I was an idiot. When I asked him why he trusted ChatGPT he said “because I asked it to check the code for any vulnerabilities and it said there were none”.<p>But, like I said, it’s also been a dream. I’ve seen an uptick in sites with poor practices on bug hunting platforms and I’ve banked about 35k. So that’s nice.
The little experience I had with ChatGPT code was amazing, but you have to keep it under a certain threshold; after that it will simply chop off its code.<p>Asking it to split the code into smaller functions so they can be emitted one at a time had mixed results.<p>But the future is indeed bright for such tools.
These LLMs have a huge potential for ML-guided home schooling or small classroom schools.<p>The knowledge resource is no longer centered around the teacher, but rather on the student. The key is to have a teacher who is keen on noticing what interests the students, then converting those interests into meaty projects that students can build.<p>Learning groups will have to be very small, maybe max 4 per adult. The adult needs to be able to quickly learn new topics with the help of LLMs, and be able to articulate what the students' interests are.<p>The value of teachers who can effectively leverage LLMs is going to skyrocket.
My first productive use for ChatGPT was writing a filebeat configuration (to extract tokens out of log lines to push into the ELK stack)... "Given a log line of $MY_LOG_LINE, can you generate a filebeat configuration to get the timestamp, source and destination IPs", and I'm amazed it did it -- well, almost: it didn't notice the timestamp in $MY_LOG_LINE wasn't ISO 8601, so that was a fun bit of troubleshooting.<p>And I could refine the generated configuration with 2 or 3 other types of log lines... I'm now a convert.
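For anyone curious, the output of a prompt like that is roughly the shape of the sketch below. The log format, paths, and field names here are made up for illustration; the `dissect` processor and its `tokenizer`/`field`/`target_prefix` options are real filebeat features:

```yaml
# Assumes log lines shaped like:
#   2023-08-01T12:34:56Z 10.0.0.1 -> 192.168.1.5 ACCEPT
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/firewall.log   # hypothetical path

processors:
  - dissect:
      tokenizer: "%{timestamp} %{source_ip} -> %{dest_ip} %{action}"
      field: "message"
      target_prefix: "fw"       # extracted fields land under fw.*
```

If the timestamp isn't ISO 8601 (the troubleshooting mentioned above), you'd additionally need a timestamp-parsing step or an ingest pipeline on the Elasticsearch side.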
It's great to be amazed by this stuff, but it really just shows how poor human intuition is at these things.<p>> Imagine envisioning Airbnb, and having its whole frontend and backend done within 1 minute.<p>...sigh...<p>You're seeing the tip of an iceberg, and you're like, "wow, it's cold"... but you honestly have no idea.<p>Any tool that you spend <i>less than an hour using</i>, you have no idea about.<p>That's it. There's nothing more to say.<p>Use it for longer. Try building larger things. Give me a considered opinion when you've formed one, not a reaction video.<p>There was a spate of this kind of article when ChatGPT first came out, and at the time, it was like: why are we seeing these "I spent 10 seconds with an LLM and I made a potato!" and "I spent 20 minutes with an LLM and made a one-page HTML website!" articles, and none of the "I spent a month with an LLM and built a new programming language", or "I spent two weeks with an LLM and I built a raytracer" articles?<p>Oh they said, "It's too soon, give it time... it's only been out a month. 3 months... 6 months...".<p>You still don't see them.<p>...because they don't exist.<p>No one has done anything impressive with this stuff; it's fundamentally limited in what it can produce, and the massive productivity benefits you see (30x faster!) are for <i>trivial tasks</i>, not <i>difficult tasks</i>.<p>...and the modest productivity benefits you get from using an actual copilot don't make articles that are nearly as interesting or get as many clicks.<p>Look, AI can seem magical, but when you have something that seems too good to be true, after using it <i>for an hour</i>, or even a day, maybe it's not the right moment to drop a blog post about how gosh darn amazing it is?
A lot of trivial plumbing work is really being solved by LLMs.<p>Even in the world of work, we need a lot of plumbing knowledge to get access to a domain, e.g. learning the basics of beekeeping or how to build a nodejs-based webapp that makes an API call. A lot of knowledge in a domain is just grind and grunt work, and that is what's being automated.<p>We now need to peer beyond, and that's where the true exploratory work in any domain lies and is useful.
Since it is trained on code from GitHub and the wider internet, wouldn't it have better examples, and be better able to provide results, for popular languages? So if you ask it to do something in C/Python/etc. you get better results than for F# or Objective-C or ActionScript.<p>When I ask GPT-4 questions about Python, it is almost like talking to a pretty knowledgeable person. It offers opinions on the best ways to do things and even on what NOT to do.
What happens when nobody knows how to create their own artwork, write their own novels, program their own code, etc. because they are so reliant on LLMs? Yet LLMs are reliant on using the output from real people. Is there an event horizon where LLMs have nothing truly new to pull from and can only use what was created by previous generations of humans? An LLM stagnation.
I began to think that we should see LLMs as just another kind of programming model to be potentially embedded into existing systems. Nothing more, nothing less.<p>This might sound like a trivial conclusion at first, but as a community we should find some kind of common understanding without playing down their potential usefulness or riding the hype train some people want us to jump on.
I may be biased, but you should totally integrate ntfy.sh [1] support, so you get a push notification every time a plane passes over your house. I think that'd be a cool use case.<p>Disclaimer: I am the maintainer of ntfy.<p>[1] <a href="https://ntfy.sh" rel="nofollow noreferrer">https://ntfy.sh</a>
I made a little runnable JavaScript script for it if you wanted to play with the opensky API yourself!<p><a href="https://www.val.town/v/stevekrouse.planesAboveMe" rel="nofollow noreferrer">https://www.val.town/v/stevekrouse.planesAboveMe</a>
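For reference, the core of such a script is small. The sketch below queries OpenSky's public `/states/all` endpoint for aircraft inside a bounding box; the coordinates and box size are arbitrary assumptions, while the state-vector indices (1 = callsign, 7 = barometric altitude in metres) come from the OpenSky REST API documentation:

```javascript
// Build a /states/all query URL for a small box around (lat, lon).
function boundingBoxUrl(lat, lon, delta = 0.1) {
  const params = new URLSearchParams({
    lamin: lat - delta,
    lomin: lon - delta,
    lamax: lat + delta,
    lomax: lon + delta,
  });
  return `https://opensky-network.org/api/states/all?${params}`;
}

// Turn OpenSky state vectors into human-readable lines.
function describeStates(data) {
  return (data.states ?? []).map(
    (s) => `Flight ${(s[1] ?? "").trim()} at ${s[7] ?? "?"} m`
  );
}

// Usage (requires network access; coordinates here are Lisbon-ish):
// const data = await (await fetch(boundingBoxUrl(38.72, -9.14))).json();
// console.log(describeStates(data));
```

The anonymous API is rate-limited, so anything beyond toy use wants caching or an OpenSky account.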
For me the main thing I've gained from ChatGPT (3.5) is I no longer dread that wall of lack of knowledge. Where you know so little you don't even know where to start and the task seems dreadful and insurmountable. Just a couple questions later you've got an intro, some jargon, some sample code. It makes it much easier to put the pieces together and ask followup questions.
Anyone remember the unlimited number of "build Twitter/blog/todo list in 15 minutes using Ruby on Rails" tutorials?<p>It's great that we've got it down to 2 minutes.<p>However, anyone who has worked in a Rails shop has experienced that somehow that initial setup seems to make everything else down the line a bit more time consuming.<p>In fact, I would posit that there might be an inverse correlation between how quickly you can get a blog up using framework X (including an LLM) and the long-term scalability of that project as you start adding features and engineers.<p>Legacy Rails apps can be terrifying; I can only imagine the horrors that will be legacy LLM apps, if they can even manage the structural cohesion to become <i>legacy</i> apps.
A lot of these comments feel similar to how people reacted to early personal computers. With a kind of superiority around how 'serious' tasks were done on mainframes, and these 'toys' would never amount to anything.<p>I think it's an exciting time where lots of people are getting computers to help them with their problems in a way I haven't really seen since HyperCard.<p>I feel like this is very much in the Hacker spirit.