I guess I'm not panicked about my job in the face of AI because <i>objective correctness</i> is required. I <i>dream</i> about the day that OpenAI can write the 100 lines of code that connect the BLE stack, the ADC sensor, and the power management code so that my IoT sensor doesn't crash once every 8 days.<p>I see the AI stuff as <i>very</i> different from, say, the microcomputer revolution. People had <i>LOTS</i> of things they wanted to use computers for, but the computers were simply too expensive.<p>As soon as microprocessors arrived, people had <i>LOTS</i> of things they were already waiting to apply them to. Factory automation was <i>screaming</i> for computers. Payroll management was <i>screaming</i> for computers.<p>I don't see that with the current AI stuff. What thing was waiting for NLP/OpenAI to get good enough?<p>Yes, things like computer games opened up whole new vistas, and maybe AI will do that, but that's a 20-years-later thing. What stuff was screaming for AI right now? Maybe transcription?<p>When I see the search bar on any of my favorite forums suddenly become useful, I'll believe that OpenAI stuff actually works.<p>Finally, the real problem is that OpenAI needs to cough up what I want, but then it also needs to cough up the <i>original references</i> for what I want. I normally don't make other humans do that. If I'm asking someone for advice, I've already ascertained that I can trust them, and I'm probably going to accept their answers. If it's random conversation and something is interesting or unusual, I'll mark it, but I'm not going to incorporate it until I verify.<p>Although, given the current political environment, perhaps I <i>should</i> ask other humans to give me more references.