TPUs are only one part of this eye-opening presentation. Skip to page 28, where Jeff starts talking about:

* Using reinforcement learning so the computer can figure out how to parallelize code and models on its own. In experiments, the machine beats human-designed parallelization. (A toy sketch of the RL formulation is at the bottom of this comment.)

* Replacing B-tree indices, hash maps, and Bloom filters with *data-driven indices* learned by deep learning models. In experiments, the learned indices outperform the usual stalwarts by a large margin in both computing cost and performance, and are auto-tuning. (Sketch at the bottom of this comment.)

* Using reinforcement learning to manage datacenter power. Machine intelligence outperforms human-designed energy-management policies.

* Using machine intelligence to replace user-tunable performance options in all software systems, eliminating the need to tweak them with command-line parameters like --num-threads=16, --max-memory-use=104876, etc. Machine intelligence outperforms hand-tuning. (Sketch at the bottom of this comment.)

* Using machine intelligence for all tasks currently managed with heuristics. For example, in compilers: instruction scheduling, register allocation, loop nest parallelization strategies, etc.; in networking: TCP window size decisions, backoff for retransmits, data compression, etc.; in operating systems: process scheduling, buffer cache insertion/replacement, file system prefetching, etc.; in job scheduling systems: which tasks/VMs to co-locate on the same machine, which tasks to pre-empt, etc.; in ASIC design: physical circuit layout, test case selection, etc. Machine intelligence outperforms human heuristics.

IN SHORT: machine intelligence (today, that means deep learning and reinforcement learning) is going to penetrate and ultimately control EVERY layer of the software stack, replacing human engineering with auto-tuning, self-improving, better-performing code.

Eye-opening.
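
A few rough sketches of what these ideas look like in code, for anyone curious. To be clear, none of this is Jeff's or Google's code; these are my own toy illustrations, with made-up names, costs, and parameters.

First, the RL-for-parallelization idea reduced to its smallest form: a policy samples a device assignment for each op, a cost model (here a trivial "max device load" stand-in for measured step time) scores it, and a REINFORCE update nudges the policy toward faster placements:

    import numpy as np

    rng = np.random.default_rng(0)
    op_cost = rng.uniform(1.0, 5.0, size=20)   # toy "ops" with made-up runtimes
    num_devices = 4

    def simulated_runtime(placement):
        """Toy cost model: step time is the load on the busiest device."""
        loads = np.zeros(num_devices)
        np.add.at(loads, placement, op_cost)
        return loads.max()

    # Policy: an independent softmax over devices for each op (REINFORCE).
    logits = np.zeros((len(op_cost), num_devices))
    baseline, lr = 0.0, 0.1

    for step in range(2000):
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        placement = np.array([rng.choice(num_devices, p=p) for p in probs])
        reward = -simulated_runtime(placement)      # lower runtime = better
        baseline += 0.05 * (reward - baseline)      # moving-average baseline
        # Policy gradient: push sampled choices up when reward beats baseline.
        grad = -probs
        grad[np.arange(len(op_cost)), placement] += 1.0
        logits += lr * (reward - baseline) * grad

    best = probs.argmax(axis=1)
    print("learned placement:", best, "runtime:", simulated_runtime(best))

The real system, as I understand it, measures actual TensorFlow step times instead of a toy load model, but the loop has the same shape: sample a placement, measure, reinforce.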
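Second, the learned-index idea as I understand it: treat "key -> position in the sorted array" as a function (essentially the keys' CDF), fit a model to it, use the model's prediction as an approximate position, and correct it with a local search bounded by the model's worst-case training error. The linear model here is a deliberately dumb stand-in for the staged neural nets described in the talk:

    import numpy as np

    class LearnedIndex:
        """Illustrative learned index: a model predicts a key's position
        in a sorted array, and a bounded search corrects the guess."""

        def __init__(self, keys):
            self.keys = np.asarray(sorted(keys), dtype=np.float64)
            positions = np.arange(len(self.keys))
            # Fit position ~ key (a stand-in for a learned CDF model).
            self.slope, self.intercept = np.polyfit(self.keys, positions, 1)
            # Record worst-case prediction error so lookups stay exact.
            preds = np.clip(self._predict(self.keys), 0, len(self.keys) - 1)
            self.max_err = int(np.max(np.abs(preds - positions))) + 1

        def _predict(self, key):
            return np.rint(self.slope * key + self.intercept).astype(int)

        def lookup(self, key):
            guess = int(np.clip(self._predict(np.float64(key)),
                                0, len(self.keys) - 1))
            lo = max(0, guess - self.max_err)
            hi = min(len(self.keys), guess + self.max_err + 1)
            # Search only within the model's error bound around the guess.
            i = lo + int(np.searchsorted(self.keys[lo:hi], key))
            return i if i < len(self.keys) and self.keys[i] == key else None

    keys = np.random.default_rng(0).uniform(0, 1e6, 100_000)
    idx = LearnedIndex(keys)
    print(idx.lookup(idx.keys[42_000]))   # -> 42000

The appeal is that the "index" is just model weights plus an error bound, which can be far smaller than a B-tree and adapts itself to the data distribution.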
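And third, the knob-tuning idea: instead of a human hard-coding --num-threads=16, let the system measure itself and choose. This is just an epsilon-greedy bandit over a single knob, a far cry from the ML-based tuning the talk describes, but it shows the shape of "replace the flag with a feedback loop":

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(_):
        time.sleep(0.002)   # stand-in for an I/O-bound request

    def run_batch(num_threads, n_requests=200):
        """Serve one batch with a given thread count; return requests/sec."""
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            list(pool.map(handle_request, range(n_requests)))
        return n_requests / (time.perf_counter() - start)

    def auto_tune(candidates=(1, 2, 4, 8, 16, 32), batches=60, epsilon=0.2):
        """Epsilon-greedy bandit over the thread-count knob: mostly reuse
        the best setting measured so far, occasionally try an alternative."""
        throughput = {c: 0.0 for c in candidates}
        counts = {c: 0 for c in candidates}
        for _ in range(batches):
            if random.random() < epsilon or not any(counts.values()):
                choice = random.choice(candidates)
            else:
                choice = max(candidates, key=lambda c: throughput[c])
            observed = run_batch(choice)
            counts[choice] += 1
            # Running average of observed throughput for this setting.
            throughput[choice] += (observed - throughput[choice]) / counts[choice]
        return max(candidates, key=lambda c: throughput[c])

    print("auto-tuned thread count:", auto_tune())

Once you accept that the system can measure its own performance and adjust, every hand-set flag starts to look like a target.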