Not to speak for anyone else, but one thing I gently disagree with:

> *Given that Hackintoshers are a particular bunch who don’t take kindly to the Apple-tax[...]*

I have zero issues with an Apple premium or paying a lot for hardware. I think a major generator of interest in hackintoshes has been that there are significant segments of computing that Apple has simply completely (or nearly completely) given up on, including essentially *any* non-AIO desktop system above the Mini. At one point they had quite competitive PowerMacs and then Mac Pros covering the range from $2k all the way up to $10k+, and while sure there was some premium, there was feature coverage, and they got regular yearly updates. They were "boring", but in the best way. There didn't need to be anything exciting about them. The prices did steadily inch upward, but far more critically, sometime between 2010 and 2012 somebody at Apple decided the MP had to be exciting or something and created the Mac Cube 2, except this time forcing it by eliminating the MP entirely. And it was complete shit, and to zero surprise never got a single update (since they totally fucked the power/thermal envelope, there was nowhere to go), and users completely lost the ability to make up for that. And then that was it, for 6 years. Then they did a kind of, sort of OK update, but at a bad point given that Intel was collapsing, and forced in some of their consumer design in ways that really hurt the value.

The hackintosh, particularly virtualized ones in my opinion (running macOS under ESXi deals with a ton of the regular problem spots), has helped fill that hole as frankenstein MP 2010s finally hit their limits. I'm sure Apple Silicon will be great for a range of systems, but it won't help in areas that Apple just organizationally doesn't care about / doesn't have the bandwidth for, because that's not a technology problem. So I'm a bit pessimistic/wistful about that particular area, even though it'll be a long time before the axe completely falls on it. It'll be fantastic and it's exciting to see the return of more experimentation in silicon, but at the same time it was a nice dream for a decade or so to be able to freely take advantage of the range of hardware the PC market offered, which filled holes Apple couldn't.
This is fascinating:

> Retain and release are tiny actions that almost all software, on all Apple platforms, does all the time. … The Apple Silicon system architecture is designed to make these operations as fast as possible. It’s not so much that Intel’s x86 architecture is a bad fit for Apple’s software frameworks, as that Apple Silicon is designed to be a bespoke fit for it … retaining and releasing NSObjects is so common on MacOS (and iOS), that making it 5 times faster on Apple Silicon than on Intel has profound implications on everything from performance to battery life.

> Broadly speaking, this is a significant reason why M1 Macs are more efficient with less RAM than Intel Macs. This, in a nutshell, helps explain why iPhones run rings around even flagship Android phones, even though iPhones have significantly less RAM. iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.
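To make the quoted claim a bit more concrete: 'retain' and 'release' are essentially an atomic increment and decrement on a per-object counter, inserted by the compiler on nearly every assignment and argument pass. A minimal hand-rolled C sketch of the pattern (illustrative only - not the actual Objective-C runtime, which is considerably more involved):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical object header: every refcounted object carries a counter. */
typedef struct {
    atomic_size_t refcount;
    /* ... object payload ... */
} obj_t;

static inline void retain(obj_t *o) {
    /* One atomic increment - emitted constantly by ARC-compiled code. */
    atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
}

static inline void release(obj_t *o) {
    /* One atomic decrement, plus an immediate free when the count hits zero. */
    if (atomic_fetch_sub_explicit(&o->refcount, 1, memory_order_acq_rel) == 1) {
        free(o);
    }
}
```

The argument is that M1 executes these (usually uncontended) atomic ops several times faster than Intel chips, and since they're sprinkled through every framework call, the savings show up everywhere. The immediate free-at-zero is also why refcounting tends to have a lower memory footprint than a tracing GC, which lets garbage accumulate between collection passes.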
I understand the machine is great, or going to be great, for most use cases. My MBP is my main workhorse, but as a freelance SRE "devops" guy, the Apple ARM platform won't be suitable for my job any time soon, if ever.

Docker is not yet available - but even when it becomes available, emulating virtualised x86 code is explicitly not going to be supported. In many cases that means pulling a Docker image built in a CI/CD pipeline where a dev screwed something up and debugging it locally is no longer an option. If I wasn't freelance, I could probably get away with some cloud instance to run all my docker stuff, but I'm dealing with too many different environments, for clients with various legal requirements, making this simply 'not an option'.

Too bad, because the machines look very promising for everything else. Development tools aren't there yet, but I expect that to be fixed pretty quickly.
> A task like editing 8K RAW RED video file that might have taken a $5000 machine before can now be done on a $699 Mac Mini M1 or a fan-less MacBook Air that costs $999

That’s insanely great. Maybe I am exaggerating but Apple’s M1 might be the best innovation in the tech industry in the past 5 years.
PSA: however impressive the M1 hardware is, you're still going to be stuck using OSX, playing in Apple's walled garden and being subjected to their awful policies.

I'll gladly join the groupie crowd once Linux runs stable on it.
The only downside of the amazing new M1 MBP is that it runs WoW on max settings at 60fps. And now I'm back in the world of Azeroth. Especially with the launch of Shadowlands.

What the hell, Apple, I thought I was safe and immune from video games with my MacBooks.
I'm typing this from a 2014 i7-4980HQ 15" MBP. This machine would have been replaced in 2017, but I wasn't impressed with that year's model. I had planned to upgrade in 2020, but the announcement of the M1 basically quashed that. I've been on this planet long enough to know that when Apple changes course like this, the old architecture is already obsolete. 68k -> PPC -> x86_64 -> ASi. The PPC G5 got exactly 1 OS upgrade (10.5) before it was EOL'd.

If the reports on performance and Rosetta are to be believed, then this upgrade may be one of the smoothest in Apple history. The Intel CPU has had an incredibly long run, 15 years, at Apple. If they are confident they can make the leap and not leave their users in the lurch, more power to them.

I'm still on the fence on buying an M1 laptop. Apple users know you pay an Apple tax and a v1 tax. My MBP is getting so long in the tooth I may have to ignore my own advice of not getting first-generation Apple hardware.
One of the things I've noticed recently, especially since the CPU space finally started moving again, is how much of a divide there now is between the computer literate and the computer illiterate.

It probably creates a social divide at least as large as the one that existed when the majority of people couldn't read or write, and it is just as "not OK".

Example of this in the first paragraph of this article:
> For everyday users who just want to browse the web, stream some Netflix, maybe edit some documents, computers have been “perfectly fine” for the last decade.

These kinds of things now read to me like "for the everyday peasant, who just wants to go swim in the river, seal the roof of their house and get to work on time, clay tablets and styluses have been perfectly fine for the last century".

Even the title screams this kind of thinking. Computers are not black magic, any more than medicine or writing were magic or sorcery back when burning witches was a thing.

Makes me a little sad.
Those of us who have been in UI design long enough know what comes from attention to detail and a professional GUI. We have all used OS X not only for its UNIX-like core (Darwin) but for its consistent UX and UI libraries. At one point in time Apple was influencing our work in a really meaningful way by setting the standard (remember the Apple Human Interface Guidelines pre-Yosemite).
For me personally, Soundtrack Pro is the most polished professional interface ever made. So in this context, UI "innovation" through emoji and the introduction of white space for touch interaction (without touch interaction) is funny but not usable. Performance aside (which is a big accomplishment), I miss the old approach with its balance of contrast and natural flow, and will stay on Catalina as long as I can. If Apple changes their stance on telemetry and bypassing things, and fixes the UI/UX design, I have no problem joining again. What is lacking in the Linux desktop is a consistent approach to UI, but for some of us maybe it is time to re-evaluate and relearn things. My personal time investment is in Emacs; with time I have more and more respect for those ideas of freedom and consistency. The selling point for me with Apple was the professional interface and high UI standards; sadly they are gone. But hey, every one of us is different, and this is good, right?
I don't quite understand how 'retain' and 'release' can be more *memory* efficient on Apple Silicon than x86... I can understand how they can be more efficient from a performance standpoint in terms of more efficient reference counting, but I don't understand how that translates to less memory usage, which is apparently what's being argued... ?

Unless on x86 some of the 'free's when the ref counts hit 0 were being batched up and deferred, and that doesn't need to happen now?
I didn't really understand the TSO explanation given in this article and found it to be a bit hand-wavy. The article says that to emulate the x86 TSO consistency model on an ARM machine, which is weakly ordered, you have to add a bunch of instructions, which would make the emulation slow. I followed that much, but after that it doesn't really explain how they get around these extra instructions needed to guarantee the ordering. It just says "oh, it's a hardware toggle"; a toggle of what exactly?

I could see them just saying no to following TSO for single-core stuff, and when running emulated code for single-core performance benchmarks, since technically you don't care about ordering for single-core operation/correctness. That would speed up their single-core numbers, but then what about multi-core?
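Not from the article, but my understanding of the "toggle": Apple's cores reportedly have a per-thread mode bit that makes ordinary loads and stores observe TSO ordering, so Rosetta 2 doesn't have to sprinkle barriers into the translated code at all, and the guarantee holds on every core, not just in single-core runs. A rough C sketch of what a translator would otherwise have to emit (names and structure here are purely illustrative, not Rosetta's actual output):

```c
#include <stdatomic.h>

static _Atomic int data;
static _Atomic int ready;

/* On x86 (TSO) the original binary just does two plain stores: other cores
 * are guaranteed to see 'data' before 'ready'. ARM is weakly ordered, so a
 * translator that wants the same guarantee has two options. */

/* Option 1 - no hardware help: emit explicit ordering on shared accesses,
 * which adds cost to essentially every memory operation in hot code. */
void producer_with_barriers(void) {
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_store_explicit(&ready, 1, memory_order_release); /* extra ordering */
}

/* Option 2 - what the M1 reportedly offers: flip a per-thread TSO mode bit,
 * so plain stores already behave like x86 stores and no barriers are emitted.
 * (C can't express the mode bit; the relaxed stores stand in for "plain".) */
void producer_with_tso_mode(void) {
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}
```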
For those of us who are picking up M1 MacBooks as first time Mac users, is there some kind of Mac crash course for devs? What apps are useful, what familiar tools from Linux can we use, etc? I'm aware of Dash, which makes me suspect there are a bunch of other Mac-exclusive tools which will be useful.
Technology seems to swing like a pendulum between running remotely and running locally as it evolves. Recently I purchased an RTX 3090, and between my Ryzen with 24 threads and the 64 GB of memory I bought for a few hundred dollars, it really occurred to me how much power my PC has for really not that much money. I don't need to be spending so much cash on cloud services when my local machine has more than enough horses to do everything I need.

I think the M1 is one more force swinging the pendulum back. I suspect that as developers port applications to ARM, people will rediscover the benefits of native installations as new software starts to take full advantage of this new hardware.
Weren't the old exploits on the Intel processors patched by essentially making many operations slower? Like they had to disable some CPU features which were previously innovative, and this hobbled processors that couldn't be repaired in hardware.

So a new architecture with better versions of the same features would feel very fast, since we went two steps back first.
I’m starting to wonder if this is the reason we had some serious problems with macOS and iOS in recent months and years. Serious bugs and serious security flaws.

The A-team was working on getting everything ready for M1. The B-team was working on the usual releases.

If the M1 is as good as everyone says, then that means they had the best people on it.
I am weirdly obsessed with this. I am burning to see the next iteration of Apple Silicon devices with an M1X or M2 chip and more ports and RAM for higher-spec devices and new MacBook designs even lighter and smaller than the current Air (bring back the 11"!).

I guess I'm just caught up in the hype and excited about movement in the chip design space after many years of Intel stagnation. Zen and the M1 are a breath of fresh air.

Waiting to replace a 2014 11" MacBook Air until the story about Parallels support is clear and maybe new MacBook designs are available.

I also have a 2013 Mac Pro trashcan-style on my desk at work. Until recently it was simultaneously available in the inventory system and marked as EOL. I'm not sure if the 2020 6-core Intel Mac Mini would actually be faster - maybe. I'm only a part-time iOS developer so I keep trucking along with the 2013 model.
I know very little, so perhaps someone could enlighten me. But I am curious how Apple Silicon will be for machine learning.

When Apple releases a MacBook Pro with 64GB of unified memory (assuming they will) — won’t that be amazing for machine learning? I am under the impression that GPU memory is a huge factor in performance. Also, is there any way that the neural engine can accelerate training — or is it just for executing trained models faster?
I'm curious about the Neural Engine cores. What software uses this? Why would I want to buy a Neural Network coprocessor in my machine, instead of using that money for a better CPU/RAM/SSD?
I'm amused by the praise.

I've been a Linux user for the last 10 years. Last week I got an M1 Mac Mini to do some iOS development on.

It feels fast, but not substantially faster than Linux. Safari is a bit snappier than Firefox, but that's about it.

Was macOS on Intel really that much slower?
I wonder how fast the M1 feels compared to, say, KDE on a high-end Intel machine. Is it really *that much faster* than anything, or just faster than what people are used to with OS X and Windows.
I do take slight issue with the section about RAM performance. The idea in the article is that the M1 Mac runs software that uses reference counting instead of garbage collection as its memory model (i.e. Objective-C and Swift software). Two issues…

1/ Sure, but that's also the case with x86 Intel Macs. They're still running reference-counted Obj-C and Swift software for the most part. So how is this an M1 differentiator?

2/ Also, Macs run plenty of software that mostly uses garbage collection, e.g. any Electron app (Spotify, Slack, Superhuman, etc.) is mostly implemented in GC'd Javascript. There's also plenty of software written with other runtimes like Java or ones implementing a GC.

So this does nothing to explain why 8GB of RAM on an OS X device with an M1 chip is better than 8GB of RAM on an OS X device with an x86 chip from Intel.
Imo the release of the M1 will be comparable to the release of the original iPhone, but for the Mac brand (in terms of profitability and long-term returns). Just saying.
It's quite clear how this "magic" is possible.
Whether you like it or not, the future is a "product on a chip". That includes everything: CPU, GPU, RAM, SSD, and assembly instructions for vendor-specific things like NSObject. This puts everything physically close together (efficient), eliminates all the protocol compatibility overhead (efficient), and removes all the standards the company can't control (efficient).
The downside, of course, is that this will be the ultimate vendor lock-in, which is hard to compete with and can't be serviced by anyone else.
The upside is that the alternatives will always remain relevant.
I bought a pretty heavily specced (not top of the line, but close) 16" MBP in April. Kinda kicking myself... and aggressively insisting to myself that the screen real estate matters a lot to me. Damnit.
I’m curious if/how Apple Silicon will compete in the server market.

Apple certainly isn’t known for producing cost-effective servers, but if they really possess technology that leapfrogs commodity hardware, they’d be crazy not to use it in every market possible, right?
> looks at 2010 MBP that's working, but starting to have Firefox issues

... This may be the actual upgrade moment. I just hope Time Machine is compatible, likewise old (CS5) versions of Photoshop, since I'm not into this subscription BS.
I've seen lots of M1 benchmarks, but has anyone done a side by side comparison of what it is like to actually get work done on one?

Take a conventional dockerized local dev environment and just start building stuff. How much time do you spend working around M1 arch issues versus building your app?

This is the key factor that is keeping me from being an early adopter. I don't get paid to figure out how to work on a new chip architecture; in fact I pay a lot of money to not have to think about those problems at all.
I just realized, thanks to this article, that the x86 memory-model emulation is not only available in the M1, but has been present going back to at least the A12 series. Apple has been planning this a long time.
I have an iPad Pro with keyboard and a 15-inch MacBook Pro I got when my previous MacBook Air couldn’t handle the video editing I was dabbling in.

I desperately want one machine (and probably in the iPad form factor) but I don’t know if Apple is ever going to get me there.

What say you HN? Is there a future where I can have a single machine? Any other suggestions for what I can do about it today? I’ve test driven Surface computers in various flavors from friends of mine and I really can’t get down with Windows. Am I doomed to carry two machines with me all the time?
Can't wait to buy the M1 and feel the speed upgrade.

Fast-forward 2 years later: browsing the web is slow as hell, battery runs out in an hour, can't even CMD-TAB without lagging.
I'm curious that the A12Z DTK Mac mini shows a moderate Geekbench score under Rosetta 2. Does this mean some improvements are not M1-only, but were already present in the Ax processors?
I guess the good thing for those of us who can't afford a new mac right now is that the used market will be flooded with recent models at a cheap price LOL.
Call me dumb, but I didn’t realize they had done so much memory optimization to make the physical 8GB of RAM so effective. I saw a much lower number than I expected and just assumed it wouldn’t handle memory-intensive workloads well. As someone who develops web tech, my entire life revolves around crushing RAM, so now I think the M1 may actually result in big gains for my workload. Hrmm.
Damn, I knew these are fast (from reviews, from pros), but I was happy with my mid-2012 MacBook Pro and all its modularity, even if it can't open 100 tabs or 7 simultaneous apps, or last more than 3-4 hours without charging.

But man, these "user" reviews are driving me towards a purchase, and it's going to punch a hole in my wallet!

Looks like Apple has reinvented the chips / processors.
I might have an explanation for the reduced RAM consumption thanks to the M1 chip: RAM compression (https://www.lifewire.com/understanding-compressed-memory-os-x-2260327).

I'm using zram on Manjaro and I see it as a trade-off between RAM and CPU power.
Intel and AMD were really limited in how much they could innovate, since they limited themselves to the Windows desktop.

And Windows basically only runs on the amd64 architecture.

Hopefully they'll start being bolder now. The MS ecosystem has been an anchor on hardware innovation (especially desktops/laptops) for far too long.
This is not only Apple. All modern mobile ARM processors, including the ones used in Android phones, are far ahead of Intel in TDP-to-performance ratio, almost by an order of magnitude. Just make bigger ARM chips with more high-perf cores, and they will destroy Intel.
I am thinking about getting this thing mostly to ssh into a Linux server. I would like to run emacs on the server and have its display bounced back to the Mac via X. Is this practical? I tried XQuartz on my wife's Mac but the fonts looked like crap.
What does this mean for the future of CPU design?<p>VLIW and other advanced designs never went mainstream in part because the AOT on install/JIT everything future never arrived. But with the success of Rosetta 2, has that future finally arrived?
I upgraded from a 32gb x86 machine to the 8gb macbook pro and it's downright amazing and I saved a bunch of money in the process.<p>Apple is going to make an absolute killing on people upgrading RAM unnecessarily.
DHH tweet quoted in the article: *You don't sit around thinking "oh, browsing the web is slow on my decked-out iMac", but then you browse with the M1, and you're like, DAMN, I can actually feel that +50%.*

I wish I could feel excited about this, but my first thought is that web developers probably have a huge wish list of things they're ready to unleash to eat up that 50%. I was expecting to get a nice 5-7 year lifetime from my early 2020 MacBook Pro. Maybe I should revise my expectations, especially if more and more desktop apps are going to be built on web technology.
I just got one. I’m blown away by the speed as well. Chrome runs insanely fast! Alas, it’s not developer-ready yet. Brew is a mess. Docker doesn’t work. PyCharm is a WIP, although you can use the x86 version. I was skeptical of the hype, but this little laptop has made me realize how slow everything else is.

Unfortunately, while the hardware has accelerated far beyond expectations, the software - specifically macOS Big Sur - is a major step backward. So many fucking animations. Everything feels like it’s moving through molasses. The UI changes seem to be shoehorned into a desktop that doesn’t need giant white space for fat fingers. Menu bars are twice as tall, taking up precious space. The top bar was already crammed with a lot of icons; now they’ve made them sparsely spaced by adding padding between the icons. Everything is baby-like, with rounded corners and without borders. Segmented UI elements are no more. I want to ask Apple’s UI team: WHY!? What is currently wrong with the macOS Catalina UI? Until you can satisfactorily answer that, there shouldn’t be any change. Stop changing the UI like you’re working at Hermès. It’s not fashion. If the reason is to unify everything across all screen sizes, then you’re sacrificing all three. Perhaps making it easy to develop apps for all 3 platforms is a plus, but as a *user*, this all feels like a regression. I’ve lost hope in modern UI engineering. It’s not engineering anymore.

I want macOS with the UI of Windows 95. That would be totally insane on Apple Silicon.
At the risk of being that (Linux) guy --

What is gained here if we're still just applying faster cycles to Apple-esque wasteful (and perhaps harmful, as we're apparently learning re: their telemetry) software?

If people really dig their Apple stuff, great. But I think it's worth thinking about the likelihood that a "slower" computer running Linux could probably serve the actual user better in terms of "getting stuff done." Moreover, I think we're pretty close to "beauty" parity here as well. Apple's advantage now is probably *mostly* the networked devices, i.e. unity between phone and PC messages, etc.
I see no mention of simd in these threads. The only thing that has made high data throughput possible is vector operations. What's the performance of libjpegturbo or hevc in software?
I'm not a huge fan of the thermal throttling comparisons.

Apple completely fucked the pooch on the previous gen(s?) when it came to design.

They don't get points for fixing an utter fuck up. That should have triggered a recall, imo.

However, the performance of M1 looks hella solid, and kudos to them. I'm gonna stay with Linux because I'm comfortable in it, but innovation is never a bad thing.

INB4 walled gardens and code signing: stop drinking the koolaid and do your own research
> iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance.

Oh, no crap Sherlock! Let me save this quote for whenever someone wants to tell me all about how GC is much superior to reference counting.

Yes, tell me how a stop-scan-mark-sweep periodic process is more efficient than just *keeping track of what you do*.
Tin foil hat warning - but how much of the M1 performance improvement is from optimizations made in Big Sur for Apple Silicon that they just didn't bother implementing for x86, since it's now the outgoing technology for Apple?

I realize the reduction in power consumed for any given quantity of work is downright amazing for laptops, but I guess I'm more curious about workstation and (build) server kinds of applications.

Also, how many of these benchmarks are x86 versus Apple Silicon where both are running Big Sur? I've been seeing so much "Xcode on Catalina" versus "Xcode on Apple Silicon on Big Sur".
"iOS software uses reference counting for memory management, running on silicon optimized to make reference counting as efficient as possible; Android software uses garbage collection for memory management, a technique that requires more RAM to achieve equivalent performance."<p>This statement is nonsense. Reference counting is typically used in garbage collection. <a href="https://en.m.wikipedia.org/wiki/Reference_counting" rel="nofollow">https://en.m.wikipedia.org/wiki/Reference_counting</a><p>It is equivalent of saying "On iOS devices the memory efficient Chrome app can be used, but on Android phones a browser is used, which requires more RAM for equivalent performance."
I got tired of such reviews filled with so much bullshit.

These people forget about Pro users and still see a laptop as an iPad with a keyboard.

Pro users need the machine for *heavy* work, and rely on compatibility (software and hardware). Also, I work 90% of the time at my desk with the power adapter connected, so battery is far down my priority list.

So far, MKBHD's was the only decent review I found. I suggest serious users watch it.
These ppl are so drunk on the kool aid they don’t even realize that 80% of their experience is common to every new laptop purchase. In 1 year these “revolutionary” computers will be boring and slow again, especially when their batteries start to deteriorate.
Such an impressive marketing launch. Regular marketing, paid reviews and a bunch of astroturfing. A standard Apple launch recipe.

I will withhold my enthusiasm or judgment until it has been in the hands of real users for six months or so.

In principle, I have never seen a Mac product that lives up to the hype. I might be harder to please.
I guess

There are bound to be issues with Rosetta 2 that will be found and hopefully patched. Same with the new OS.

I predict some problems with hardware as well.

To me this is common sense.

This is a gigantic step. A gigantic release.

Such events nearly always have issues.

Which is why so many software developers work with continuous delivery.

Though we might think that's how the development took place internally.

I do not know how many dev machines were handed out prior to the launch.

Like anyone, I would like all the glitter and fanfare to be real and accurate.