I feel this, because it's like I don't need to know about something; I just need to know how to know about something. The initial contact with a mystery subject is overcome by knowing how to describe the mystery in a way that lets the AI understand what I don't understand, and then it sets about filling in the understanding.

An example: I have no clue about React. I do know why I don't like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I've had learning React and using it productively .. and voila, it plots a chart through the knowledge that, kinda, makes me want to learn React and use it.

It's like the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.

Another thing I realized lately, as ML has taken over my critical faculties, is that it's really only useful for things that are already known by others. I can't ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it's not new or groundbreaking. It's just contextually - in my own local ontological universe - filling in a mystery gap.

Pretty fun times we're having, but I do fear for the generations that will know and understand no other way than to have ML explain things to them. I don't think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..