The reason the mainframe persists is that it's a pretty slick development and deployment environment. A lot of things you might otherwise cobble together as dependencies - a database, a message queue, observability facilities, even deployment strategies like hot-hot deployments - are all just built into the platform. That means they're trivial to consume and fully supported by one vendor. It's like the world's most comprehensive application development framework.

Going back to the hardware everyone likes to focus on: it's less radically different from normal servers today than it was historically. The mainframe today is a 19-inch rack like any other. By that I mean it is not only a 19-inch rack like your x64/ARM servers, but also has the same power density (32 or 64 amp racks), cooling requirements, etc.

The most interesting bit is the software, not the hardware. There are cool hardware aspects too - but focus on them and you miss the real reason these things are popular in certain environments.
I never worked on z/OS, but I did work on the AS/400 (iSeries, or whatever it's called now).

I think the main thing missing here is how much IBM really brings to the table.

> If one crashes or has to go down for maintenance, the other partitions are unaffected.

Effectively, IBM often does that for you. The machine detects an issue and calls IBM, who sends someone out (in my day it was immediately), and they fix it. Then it's fixed, they leave, and most of your staff has no idea they were there at all.

Plus, there's not enough that can be said about using a dedicated stack of hardware and software owned by one company. If there's an issue, it's IBM's. They are the ones who need to fix it. No trying to get HP vs Microsoft to agree to take the blame (which can take literal weeks). Just call IBM, and they take care of it. (In theory.)
"They’re designed to process large amounts of critical data while maintaining a 99.999 percent uptime—that’s three seconds of outage per year."<p>Very wrong. Five nines is five minutes and 13 seconds of cumulative downtime in a year[0].<p>Three seconds of downtime in a year is seven nines[1].<p>[0] - <a href="https://uptime.is/five-nines" rel="nofollow noreferrer">https://uptime.is/five-nines</a><p>[1] - <a href="https://uptime.is/99.99999" rel="nofollow noreferrer">https://uptime.is/99.99999</a>
If you zoom out, the ecosystem is very comparable to running on AWS or similar. It's an opinionated environment with proprietary answers for how to do things like scheduling, short-lived tasks, long-lived tasks, storage allocation, networks, monitoring, forced version upgrades, etc.
I believe the real reason for its survivability is the fact that you can pull a tape from the seventies and those binaries will run without any modification.
It's not only that you can easily recompile your COBOL from the '70s; the binaries themselves are still compatible. You've never been pushed to migrate to another technology.
Imagine the effort and 'knowledge' embedded in those programs as they evolved. The banks themselves don't even know how many laws and regulations they have encoded in there, though they are conscious of that fact.

As someone stated in another comment, the software is the impressive part.
I've never used or even seen a mainframe in 26 years in the tech industry. My brother-in-law works for a bank, and the business basically runs on one.

The hardware and software are certainly impressive, but does anyone use a mainframe for a new project, rather than just upgrading or expanding an existing system?

I'm in integrated circuit design and we have compute clusters with thousands of CPUs. For some jobs we use a single machine with 128 CPUs and over 2 TB of RAM. Some steps get split over 30 machines in parallel. All of this EDA / chip design software migrated from Sun and HP workstations in the 1980s and '90s to Linux x86 in the 2000s. I think some of it ran on mainframes in the '70s, but does anyone use mainframes for scientific-style calculations, or is it just financial transaction / database kind of stuff?
"Today’s IBM mainframe CPUs are based on a highly evolved version of the POWER architecture that IBM has been developing for over 30 years. "<p>I've heard a lot of largely clueless people who weren't aware of the differences between the zeries and pseries say something like this, but its generally been entirely false (especially 15-30 years ago which overlaps with some time I myself spent at IBM). Given the rest of the article I wouldn't presume the author is in this category.<p>So has something changed? or is the implication stretching the truth? I mean I guess you could strip a POWER core down so it only runs s390 microcode/etc, but that likely won't actually yield a performant machine and the process of evolving it would likely fundamentally change the microarch of whatever RISCish core they started with.<p>I mean they are entirely different Arches, in the past utilizing entirely different microarches. I can see some sharing of a RTL for maybe a vector unit, or cache structure, or a group doing layout, but that doesn't make the zeries processors any more derivative of POWER than Itanium was derivative of x86, etc.<p>PS: the bit about zos partitions supporting linux seems like a bit of confusion too, earlier its correct about the LPARs being capable of running linux directly, but ZOS isn't providing the lpar functionality, and is generally just another guest alongside linux, ztpf, and various other more esoteric "OSs" that can run natively. There is a unix system services in zos but that isn't a linux kernel/etc.
> *Mainframes descended directly from the technology of the first computers in the 1950s. Instead of being streamlined into low-cost desktop or server use, though, they evolved to handle massive data workloads.*

I think the first sentence is 100% correct, but the second one not so much: current desktops and servers (not to mention laptops, tablets, smartphones, etc.) evolved from the first microcomputers introduced in the 1970s with the idea of having a computer (albeit initially a not very capable one) that anyone could afford. These then quickly evolved during the 1980s and 1990s to cover most of the applications for which you would have needed a mainframe a few years earlier.
I was involved in an IBM mainframe to RS/6000 migration in the mid-1990s for a major aerospace company. We moved the engineering functionality to a client/server system, including many custom programs and CAD/CAM/CAE.

Like the article mentioned, migration was a Herculean task done over a period of years, with a good chance it wouldn't succeed. I worked many 70-hour weeks; some days I had to drive to the factory at 2 AM to monitor jobs because only certain people had remote access. Floating point and text encoding were different between the systems, and decades of legacy engineering and manufacturing data and code (having cost hundreds of millions of dollars to produce) had to be translated and verified. There was a rivalry with the mainframe group, who wanted to see it fail, plus reluctant programmers, engineers, and line workers. On and on and on... we had to apply start-up levels of creativity and ingenuity while working in a MegaAeroCorp bureaucratic atmosphere.

Ultimately, the project was a success, although the system soon moved to PC-based desktops as graphics cards became more powerful. I got a company award from my internal engineering customers. Shortly after the dust settled, I got called into a secretive meeting with my boss and the department head, expecting a promotion and a raise, or at least an "Attaboy!" No, I was reprimanded by an HR rep with a formal note in my records about my attitude ("You risked everything by cowboying!"). I was admonished for having the gall to take a couple of sick days during the ordeal (my coworkers were concerned when they found me passed out at my workstation; my boss warned me after my time off, "If you are sick *be* sick," and called me a filthy name). I was put on double secret probation.

Our whole department was eventually outsourced, and I ended my tenure there by being escorted by security to the door and told not to let it hit me in the backside. In retrospect, I guess I should have joined the mainframe group on the business side; they are still around grinding out profit statements. It *was* a challenge to get rid of a mainframe, maybe more so than eliminating our entire engineering support team.
The only thing not mentioned is the co-processors that handle a lot of ancillary tasks and thus keep the main CPUs free for more work. For instance, printing a report might require just one interrupt per page, with the rest handled by the printing controller. The TN3270 terminals talk to a separate communications controller before anything is passed to the CPU, and the disks used to be able to return results in sorted order; not sure what they do now...
> Communication is through Kafka events or Java Messaging Services, and new server instances can be spun up in seconds in AWS or Azure clouds to provide additional capacity, which is needed for high-volume processing.

I'm honestly surprised that *a bank* would agree to run anything even remotely serious on someone else's infrastructure.
My partner does very technical systems hacking on z/OS. They have a huge problem hiring new programmers. His colleagues are all well over 65 and are inevitably retiring (or worse). It seems like a real problem for the future of the platform.
It may just be my mileage, but I frequently hear about the shortage of mainframe professionals, and some years ago I seriously considered getting into it. However, after some attempts to find training on the topics I typically see in job postings (CICS, JCL, etc.), I gave up after finding no courses that also provided access to a mainframe "account" for practicing. Maybe I didn't search properly. I've been advised to try an entry-level position in a company hiring juniors, since they usually provide the training during the first months. But after over 20 years as an engineer, that felt like a bigger step back than I was willing to take.

I still feel interested, though. I'm aware of the current state of chaos in mainframe development, but honestly, I don't think the current web/mobile situation is much better, and besides, I personally really hate them.
Early in my career I worked on IBM mainframes and wrote JCL for them.

Let's say someone knocked on my door tomorrow and said "We're looking for someone with any experience at all with JCL on IBM mainframes. We'll pay a million dollars a year for a 10-year guaranteed contract. Are you that guy?"

I would say "Nope. Sorry. I'm a plumber. I don't know nuthin' about computers."

(And then they would say "Ha! GOTCHA! How'd you know we were talking about computers? Now take your million-dollar check and watch your head as you get into this nice black helicopter.")
IBM teaches you to use and program those: https://www.ibm.com/z/resources/zxplore
The mainframe exists because IBM and the devs respect the time and investments made by its customers.

Efforts are made to make sure code doesn't break, and migration paths are put in place.

Sadly, only Microsoft and a few others share this attitude, and it's likely why these companies and their products will be around forever.
Believe it or not: I never worked on an IBM mainframe, except in college. It always seemed like the prejudice against it meant that, once you get into that world, you can't get out.

That said: what the business types like is the belief, whether it's true or not, that when you call IBM for service they're there before you hang up the phone.

Also, a long time ago at Oracle, I sat in probably the most boring meeting of my life: a group called MOSES, composed of sysadmins for Unix. Their complaint was that, supposedly, all mainframe sysadmins did things the exact same way, so if you hired a new one, there was no training needed. Whereas in Unix, everyone did things differently, so a new hire couldn't be productive right away.
I worked on a mainframe for a major airline over 20 years ago. I always found it amusing that Sabre's detailed chargebacks showed my programs/jobs costing the airline almost seven figures a year; I never knew if that was real dollars or a price to be negotiated down. Most of my programs repackaged yesterday's flight and passenger data into CSV files so an external program could pick them up and ETL them into a data warehouse. At the time, there was no utility like ODBC. I had to use JCL and SAS and my own home-grown utilities. I even had to write a utility to output CSV (there was none offered out of the box by IBM or SAS).
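For anyone who has never had to hand-roll CSV output: the fiddly part is the quoting rules. Here's a minimal Python sketch of the escaping logic such a utility has to get right (purely illustrative; it has nothing to do with what actually ran on that mainframe):

```python
# Minimal CSV field escaping, RFC 4180 style: quote a field if it contains a
# comma, a double quote, or a newline, and double any embedded quotes.
def csv_field(value: str) -> str:
    if any(ch in value for ch in ',"\n'):
        return '"' + value.replace('"', '""') + '"'
    return value

def csv_row(fields: list[str]) -> str:
    return ",".join(csv_field(f) for f in fields)

print(csv_row(["LAX", "JFK", 'seat "12A"', "O'Hare, Chicago"]))
# LAX,JFK,"seat ""12A""","O'Hare, Chicago"
```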
I ran Internet banking for a small bank in the early 2000s. We had our IBM mainframe right next to a bunch of HP-UX servers. It was just massive and had a nice red button which they warned all of us to stay clear of. I recall that was around the time running Linux on the mainframe was becoming a thing, and we tried to run WebSphere and our J2EE app on it. It was not successful (too slow) and we kept running on HP-UX.
Back in the early 80s, I visited the Police National Computer Unit at Hendon in North London. The computer room was the size of a football pitch. Most of the floor was covered with washing-machine-sized disk-pack drives, along with quite a lot of dedicated IO processors (also washing-machine-sized). The walls were lined with storage for offline disk-packs. This system was a triple Burroughs B7800.

What distinguished mainframes from minicomputers back then wasn't their awesome processing power, or their resilience; it was their huge IO capacity. They could handle hundreds of storage devices, and hundreds of thousands of terminals. At least, that's how I was told it; I never worked on mainframes.

Nowadays a single connection to the internet can connect you to a similar number of "terminals", and storage is also accessed over a network. So the advantages touted by the author are quite different from what they used to be; IO nowadays isn't the issue it was back then.
Here’s a recent-ish Changelog podcast with one of the few professors who teaches about IBM mainframes and COBOL and is part of the Open Mainframe Project: https://changelog.com/podcast/524
Ah, good to hear that the IBM mainframes are still around, are more *powerful*, and that the software for the old IBM mainframes will still run on the new ones.

So, yup, JCL still works! And 3270-series video terminals, with 24 lines of 80 characters each, driven by CICS (Customer Information Control System, or some such), are still used.

But I didn't see whether

(1) Rexx, often a good replacement for JCL,

(2) XEDIT, and its PC version KEDIT, my favorite text editor,

(3) and VM/370, an impressive virtual machine *supervisor*, upgraded from the original CP/67 for the IBM 360/67 mainframe

are still supported and used. I would be shocked if they were not still supported and still in use, especially for development and analytical computing tasks.
Absolute fringe opinion (or is it?): the performance and reliability are awesome, but have nothing to do with the real reasons IBM mainframes are still around.

Backward compatibility *helps* - you can still run code for the IBM 1401 on these machines if you need to. When they came up with System/360, they effectively crystallized software development... taking what used to be a forced march to new hardware every few years, as you got a new, faster, *incompatible* machine and rewrote the code for the better, faster hardware... and just FROZE that stream of development, which is why Y2K really happened.

But even that's not the real reason IBM mainframes have stuck around so long... it's the same reason virtualization and containers took off: they give *no default access* to files or disks; the administrator has to plan out which disks and other resources a job (or runtime, VM, or container) will access. (Example: the DD statement of JCL[1].)

This has the effect of being a *very coarse-grained* capability object system.[2]

The default access to everything on a Windows/Unix/Linux system just doesn't work in the modern era. *It is because side effects can be strictly controlled* that mainframes, and later VMs and containers, have so much value.

-- a small rant:

It's 2023... consider the humble AC wall outlet: you can plug almost anything into it, and the circuit breaker or fuse will prevent bad things from happening to your wiring, house, etc.

Why can't we protect against a rogue device plugged into a USB port? Why can't we just run any program we please, like we can plug in a lamp we bought at a garage sale, without risking everything?

--

[1] https://www.ibm.com/docs/en/zos-basic-skills?topic=concepts-jcl-statements-what-does-dd-statement-do

[2] https://en.wikipedia.org/wiki/Capability-based_security

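To make that capability analogy concrete, here is a minimal Python sketch of the DD-statement idea (the class names and dataset names are mine, purely illustrative, not any real JCL or z/OS API): a job only ever gets handles to the datasets it declared up front, so its side effects are bounded by its declaration rather than by ambient filesystem access.

```python
# Illustrative capability-style sketch of the JCL DD idea: a job declares the
# datasets it needs up front, and the "system" hands it only those handles.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dataset:
    name: str          # e.g. "PAYROLL.MASTER" (hypothetical)
    disposition: str   # e.g. "SHR" (shared read) or "NEW"

class Job:
    def __init__(self, name: str, dd_statements: dict[str, Dataset]):
        self.name = name
        self._dd = dd_statements   # the only resources this job can touch

    def open(self, ddname: str) -> Dataset:
        # No ambient access: anything not declared simply doesn't exist here.
        if ddname not in self._dd:
            raise PermissionError(f"{self.name}: no DD statement for {ddname}")
        return self._dd[ddname]

job = Job("PAYJOB", {"INPUT": Dataset("PAYROLL.MASTER", "SHR"),
                     "REPORT": Dataset("PAYROLL.REPORT", "NEW")})
print(job.open("INPUT"))   # allowed: it was declared
job.open("HR.SALARIES")    # raises PermissionError: never declared
```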
The problem with most mainframe migration projects I've seen is that they suffered the bad luck of being started around the '90s and 2000s, a really complicated age in enterprise software: the age of COM/DCOM/COM+ and EJB 2.0.

Any normal person would prefer to program in S/370 assembly than be forced to use EJB 2.0 Entity Beans.
I remember in my late teens, working as a temp... literally looking up individual accounts from a "past due" report to make sure they were still past due before they received a call from the collections dept. Why? Because paying a temp was literally less expensive than re-running the report on the mainframe.
One of the reasons mainframes continue to be used is depreciation. I know a few companies who love their mainframes because they are fully paid for, and from a budget perspective they are seen as "free".

This can be very attractive compared to the never-ending OpEx of cloud computing.
The trick to surviving in enterprise environments is consistently and credibly beating the rate of return of the longest-horizon strategic budget (and as luck would have it, enterprises' strategic horizons are not that long). At any point it is still better to stick with the current system than to change. You boil the frog, but very carefully.
Listen to this episode, "Mainframes are still a big thing": https://changelog.com/podcast/524

tl;dr: This awesome professor has an amazing track record of getting seemingly "average joes" trained up for mainframe programming. Because of how uncool this process is perceived to be by the average SV developer, it's literally shunted off into the dark recesses of the world, despite it being an awesome jobs platform.