Looking back, we don't see any singularity in the past, so I believe we will not experience a singularity event in the future either. If anything, there will be a point of no return in the collapse of a complex system, but not a singularity of progress. Why? Because the law of entropy still applies here. The existence of any organized, advanced system actually runs against entropy: chaos is the normal state, not order. In the long run, therefore, any system will collapse. Conversely, continuing to build up a highly organized, advanced system is hard. Evolution is blind, and so is technical and organizational advancement. That is why we experience the law of diminishing returns in each and every area.
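To make that diminishing-returns point concrete, here is a minimal toy sketch in Python. The logarithmic growth model and the numbers in it are my illustrative assumptions, not measurements: under this model, every doubling of effort buys only a constant additive gain.

```python
import math

# Toy model (an assumption for illustration, not a measured law):
# capability grows roughly logarithmically with invested effort,
# so each *doubling* of effort adds only a constant increment.
def capability(effort: float, scale: float = 1.0) -> float:
    return scale * math.log2(effort)

for effort in [1, 2, 4, 8, 16, 32]:
    print(f"effort={effort:>3}  capability={capability(effort):.1f}")

# Effort doubles at every step, yet capability climbs by a fixed +1.0:
# the marginal return on each extra unit of effort keeps shrinking.
```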
Take AI, for example. The popular idea is that once an AI is smart enough to design and implement the next generation of itself, it will develop into a runaway superintelligence, a singularity. But it won't. Why did DeepMind stop AlphaZero after a few days of training? Because it was smart enough to defeat Stockfish, then the strongest chess engine on the planet? No: because even if they had let it run for another year, there would have been no significant progress. Once a stable learning equilibrium is reached, the AI needs to increase its capacity, its connections and parameters, to become smarter. But to train a larger network, it needs more data, more time, and more energy. Exponentially more, if there is no breakthrough in the learning heuristic. As the search space expands exponentially, there is no way the same old learning heuristic can adequately explore the new space with guaranteed success. It must experiment with different designs, spawn new individuals, accept loss and death. It must evolve! Yes, it might do this faster than humans did, but there will be no superintelligence overnight. The process will more likely create a sea of different kinds of intelligence, each better in certain domains but none better in all of them.
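Here is a rough sketch of that search-space argument, with purely hypothetical numbers (a branching factor of 30 and a fixed budget of one billion position evaluations, neither taken from AlphaZero's actual setup): each extra ply of depth multiplies the space, so the same budget explores an exponentially shrinking slice of it.

```python
# Toy numbers, assumed for illustration only:
BRANCHING = 30   # hypothetical moves available per position
BUDGET = 10**9   # hypothetical positions we can afford to evaluate

for depth in range(2, 9):
    space = BRANCHING ** depth   # positions reachable at this depth
    coverage = BUDGET / space    # fraction of them our budget can visit
    print(f"depth={depth}  space={space:.1e}  coverage={coverage:.1e}")

# Each extra ply multiplies the space by 30, so coverage collapses from
# ~100% at depth 2 to about one position in a million at depth 8. A fixed
# exploration budget cannot keep up with an exponentially growing space,
# which is why the same old heuristic stops guaranteeing progress.
```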