Among other things, the author presents the argument that a more efficient meritocracy will exacerbate the fact that we don't all have the same genetic endowments, and that these endowments are subject to hereditary capture to much the same degree that wealth is. Furthermore, future technologies will allow the wealthy to simply buy permanent genetic advantages for their descendants.

Just as a universal basic income is presented as one antidote to wealth inequality, the idea of universal genetic enhancement is presented, specifically enhancement of intelligence (whatever that is). If we assume it's a fait accompli that members of our elites will pursue genetic enhancement of intelligence for their children or themselves, what are the strongest consequentialist objections to free, universally provided genetic enhancement, assuming such therapies are actually effective, practical, and safe?

One obvious objection is of the "Brave New World" variety: we have as yet no idea how systematic selection to increase "g" (or any trait, for that matter) could stunt or enhance other traits, deplete valuable kinds of cognitive diversity we can't yet measure, or twist our values in some immeasurable and negative way.

Worse still, it's easy to imagine government scientists in more authoritarian societies stumbling on allele combinations that enhance political docility, consumption-oriented behavior, thriftiness, and so on, and selecting for those in the next generation to solve demographic, economic, or political problems.

On the flip side of that fear is the hope that we could select for propensities that help us solve the daunting list of global co-ordination problems now facing us, climate change and dangerous AI being the two most obvious ones. The consequences of failure there are so dire that we may even have reason to see such enhancement as necessary -- the equivalent of a species-level adrenaline shot to get us through an existential crisis.

And what if we could make ourselves less dishonest, manipulative, cynical, and tribalistic? What if we could design our values to be different from what they are, to be what we wished they were? That's much scarier to me, for reasons that are harder to explain. And it mirrors a bit the problem of building a self-enhancing AI that doesn't "diverge to evil".

I'm sure there's a rich seam of blogosphere material out there on these topics, maybe even some academic papers. I'd be very interested if someone is willing to share links to specific arguments or discussions.