The scope of Unicode is really ill-defined, in my opinion. It wants to be backward compatible (the bell character, PRN, etc.), but at the same time support as many writing systems as possible, including historical ones and invented ones (e.g. emoji).<p>Can we do better?
I think any alternative solution you come up with, for whatever facet of localization, internationalization, and human-language support you care about, is going to be worse than the status quo: choose decent Unicode-aware languages and libraries, and choose your own limited range of Unicode support.
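A small sketch (in Python, using only the standard library) of why "Unicode-aware" matters in practice: the same visible text can have multiple underlying code-point sequences, so naive equality and length checks silently fail unless you normalize first.

```python
import unicodedata

# 'café' written two ways: precomposed vs. decomposed
a = "caf\u00e9"    # single precomposed character U+00E9
b = "cafe\u0301"   # 'e' followed by combining acute accent U+0301

print(a == b)            # False: the code-point sequences differ
print(len(a), len(b))    # 4 vs 5, even though they render identically

# Normalizing both to NFC makes them compare equal
nfc_a = unicodedata.normalize("NFC", a)
nfc_b = unicodedata.normalize("NFC", b)
print(nfc_a == nfc_b)    # True
```

This is exactly the kind of thing a decent library handles for you and an ad-hoc replacement scheme would have to re-solve from scratch.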
I doubt it. Writing systems are not mathematically systematic like computer codes. They're not even internally consistent. The "big problems" with Unicode aren't in its schemas. The "problems" are inherent in the domain Unicode attempts to encode.
Oh, <i>designing</i> something is easy, no worries. (As compared to "getting adoption above 0.0001% before the heat death of the universe": how long did it take before we finally got somewhat workable Unicode adoption, even with the mess of about 5 different Unicode encodings, astral planes and whatnot - 15 years? 20? 25? And all the other bazillion "extended ASCII" encodings are nowhere near extinct, either.)<p>TL;DR: Betamax.