New technology taking my job is the least of my concerns here. In my opinion there is simply a bifurcation of software underway: traditional programming on one side, and training/learning-based technologies on the other.

Traditional software will continue to be chosen when we want predictable, unbiased, mechanical execution of instructions. There are many areas where this is preferred, and I don't see that changing. Mechanical and later silicon calculating devices are invaluable for their speed, but their greatest benefit is that they are predictable and consistent: they do not make errors unless the design is in error.

AI, machine learning, and other training/learning-based technologies also have many useful and tantalizing applications. For applications that enhance productivity, provide entertainment (e.g. art and music), or autonomously perform tasks where mistakes can be tolerated, these technologies will yield great things.

However, for many applications we don't want a complex device whose behavior, while it can be extensively tested, cannot be completely understood or examined to be provably correct. Or whose faulty action cannot be definitively reproduced and root-caused after a mishap. Or whose black box can be infected or influenced by bad actors in a manner that is undetectable.

I don't ever want to see a radiation dosing machine that is clever, or an industrial control process trained to infer its own decisions where injury or life is at stake, nor do I wish to argue with a machine to open my pod bay door.

Alternatively, perhaps legal precedent will simply establish the degree to which machines are allowed to make mistakes, and if they make fewer than a human would, we will accept the cost/benefit of injury, loss of life, or evil as 'practical' and move on. 'Actuary Shrugged'?

The most ominous prospect is if humanity fails to evolve past war and conflict faster than this technology's destructive capability advances. Maybe Fermi will get his answer.