I feel like this is a super surface-level analysis that's based on just the "epiphenomena" of the two paradigms instead of the actual forces driving the distinction (i.e. control).<p>>I sympathize greatly with this view. For the past five years, I have exclusively run Arch Linux. I love the early-2000s style of personal computing: text-heavy interfaces, words rather than icons, uniform keyboard shortcuts everywhere.<p>>[...] but I sit on a clacky IBM keyboard to write code and blog posts in a terminal that hasn’t changed in twenty years.<p>This is just such a misguided way to frame the difference - inspectability and control over your device do not in any way imply having to use outdated technology, which is what this is saying if I were to take it literally. Text-heavy interfaces do not intrinsically attach to the idea of the "extension of self", any more than ThinkPads enable UNIX wizardry. Conversely, a laptop with a touch bar or an OS with a voice assistant does not imply lack of control, or "becoming a part of self".<p>Come to think of it, Windows XP was in no way more customizable or inspectable than the current crop of operating systems; it was already a full, pre-packaged black box - and the distinction between Android and iPhone is almost entirely superficial. Sure, you can switch your launcher, and <i>some</i> devices are easier to "root" (i.e. 
circumvent black-box measures), but the differences end there: when I hold my Android phone, I am absolutely holding a magic wand, just of a different grain.<p>I do think that there are two opposing trends, but I struggle to see the distinction as an "extension of self" / "part of self" approach - unless, of course, the whole idea is that the latter become a "part of you" because they offer you so little control that you have no choice but to mold yourself around them - in which case I fail to see how that constitutes a separate paradigm fulfilling a real desire.<p>I think the reason this distinction appears is that the "part of self" devices are generally more successful, thanks to a mutually reinforcing combination: the platforms tend to be locked down because it benefits the vendors, and these devices often have superior UX because, being profitable, they receive more developer attention. This results in well-polished "magic-wand" devices that are user-hostile - and the conflation of the two. Android runs on a myriad of different devices due to not being as locked down, but as a result, the average Android phone receives a thousandth of the care a new iPhone does.<p>Neither is actually a consequence of user desires except on the broadest level, which is why I do not think the split is as deep as the author claims. The level of control a user seeks from their computing devices is a spectrum, so splitting it into two opposing approaches feels somewhat arbitrary and ultimately unproductive to me, as it sort of solidifies the misconception that a capable, general-purpose device is somehow impossible and that having control over a device requires you to roll back decades of progress.