
Why Do Big Irons Still Exist?

44 points by diegolo over 9 years ago

16 comments

ChuckMcM over 9 years ago
The general-purpose answer to that question is OLTP. The transaction-processing community has a number of benchmarks which look at the cost per transaction, and large mainframes typically "win" in those scenarios. As for *why* they win, that is an interesting question.

As a systems enthusiast and someone who has watched as computers got small and then big and then small and then big again, I believe the fundamental answer is based in state machine theory, specifically around how data becomes "entangled" with other data. That is the essence of what makes transactions hard.

I first ran into this looking at scaling file systems. Unlike RAID, where all of the blocks in a stripe are related mathematically, a "file" as a sequence of octets is defined not only by the mutations that happen to it, but the order in which those mutations take place. So "append 1, 2, 3; back up one; append 4, 5" leaves 1, 2, 4, 5 if applied in sequence, but leaves 1, 2, 3, 4 if the last two steps are swapped. Thus both the operations and the order of the operations are important. To hold the state of a complex sequence stable, you generally have to have it all in memory ready to complete (commit), then rapidly verify it's stable, and then commit it.

Clusters of smaller systems have a hard problem with this. That said, I would love to play with some of Google's Spanner systems to see how well they handle the OLTP workload with respect to cost/size/power. The paper suggests that there is a credible path there as flocks of distributed systems get cheaper and more easily connected.
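A minimal Python sketch of the order-dependence described above; the operation names are illustrative, not from any real filesystem API:

```python
# Toy model of a "file" as a mutable sequence: the same set of
# mutations yields different contents depending on their order.

def apply_ops(ops):
    buf = []
    for op, *args in ops:
        if op == "append":
            buf.extend(args)
        elif op == "back_up_one":  # drop the last octet
            buf.pop()
    return buf

in_order = [("append", 1, 2, 3), ("back_up_one",), ("append", 4, 5)]
swapped  = [("append", 1, 2, 3), ("append", 4, 5), ("back_up_one",)]

print(apply_ops(in_order))  # [1, 2, 4, 5]
print(apply_ops(swapped))   # [1, 2, 3, 4]
```

Only the order of the last two steps differs, yet the results diverge, which is why a transaction system must pin down both the operations and their sequence before committing.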
nickpsecurity over 9 years ago
Here's a nice summary of their advantages:

http://ezinearticles.com/?Advantages-and-Disadvantages-of-Mainframe-Computing&id=7413087

It's hard to toss out a trouble- and hacker-free system that has handled everything thrown at it for 30+ years, can run any workload, maintains backward compatibility, and supports new stuff. Channel I/O is also frigging awesome:

https://en.wikipedia.org/wiki/I/O_channel

Note: I wrote in Ganssle's Embedded Muse that real-time could benefit from a beefy CPU plus a low-end one for I/O interrupts. He agreed, and one of us found a SoC that did something like that, with quite some results. :)

Virtualization, pay-for-what-you-use, hardware accelerators, I/O offloading... all these "new" things have been in mainframes since the '70s. Unlike the modern stuff, the people coding them focus on making them boring, predictable, and reliable. Plus your old software is future-proof and you can do new stuff. So, risk-averse businesses think they're worth the HUGE amount they spend on them.

That said, there is a negative reason many companies stay: lock-in. The older companies invested decades' worth of money in mainframe-specific software and libraries/tools from companies that no longer exist. Porting all that over to modern architectures would cost way more than a mainframe, plus carry the risk of catastrophic failure. So they just pay the bill each year and accept any improvements they get.
fiatmoney over 9 years ago
Big Iron exists to do tons of high-reliability (as in, "fire some buckshot at the server, swap some parts, zero downtime") transaction processing.

Little-known fact: the original high-throughput NoSQL document database was written by IBM and is still around.

https://en.wikipedia.org/wiki/IBM_Information_Management_System
bmh100 over 9 years ago
I work with a platform that fits exactly into the big-iron use case. A typical machine will host millions to billions of rows of transactional data in memory. The round-trip time from a user interaction to fully joined (across several to dozens of tables), aggregated (across several dimensions), and visualized data rendered by the client browser is expected to be less than 2 seconds. Each second of wait is an exponential cost to user experience. Yes, you have a single machine with dozens to hundreds of GB of RAM. You also have a responsive analytics experience, and that value makes it all worthwhile.
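A rough sketch of the single-pass, in-memory join-and-aggregate workload this describes; the table shapes and names are invented for illustration:

```python
import time
from collections import defaultdict

# Hypothetical in-memory fact table of (customer_id, region_id, amount)
# rows, joined to a small dimension table and aggregated in one pass.
transactions = [(i % 1000, i % 7, 1.0) for i in range(1_000_000)]
regions = {r: f"region-{r}" for r in range(7)}  # dimension table

start = time.perf_counter()
totals = defaultdict(float)
for customer_id, region_id, amount in transactions:
    totals[regions[region_id]] += amount  # join + aggregate, no disk I/O

print(f"{len(totals)} groups in {time.perf_counter() - start:.2f}s")
```

With everything resident in RAM on one box, the interactive latency budget is spent on the scan itself rather than on network hops or disk seeks.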
spo81rty over 9 years ago
Some database-heavy applications were not designed with sharding/scaling out in mind. Rewriting an entire application to do so is not a trivial task, and it consumes the focus of an entire company and development team for the duration of such a major endeavor.

At my last company we had this problem. We had a simple application for tracking inventory that morphed into a CRM with tons of data from emails, notes, etc. The database grew to be terabytes in size. Using Fusion-io cards and replicating to read-only databases was required to keep us going, and it worked. We had to keep buying bigger iron. A database server with 2TB of RAM!

At my new company, Stackify (http://www.stackify.com), I made sure we gave each client their own database so we knew we could scale out from the beginning. Now we manage over 1,000 SQL databases, and that has its own set of challenges. We use SQL Azure, so that makes it all pretty simple thanks to their new elastic pool features. Versioning SQL schemas and using tools from Red Gate have helped us keep our sanity.
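A sketch of the database-per-tenant routing pattern described here; the tenant names and connection strings are placeholders, not Stackify's actual setup:

```python
# Database-per-tenant routing: each client gets its own database, so
# tenants can be moved, scaled, or restored independently from day one.

TENANT_DB = {
    "acme":    "Server=pool-01;Database=client_acme",
    "globex":  "Server=pool-01;Database=client_globex",
    "initech": "Server=pool-02;Database=client_initech",
}

def connection_string(tenant: str) -> str:
    """Resolve a tenant to its dedicated database."""
    try:
        return TENANT_DB[tenant]
    except KeyError:
        raise ValueError(f"unknown tenant: {tenant}")

print(connection_string("acme"))
```

The trade-off is exactly the one named above: per-tenant isolation makes scale-out trivial, but schema versioning and fleet management across hundreds of databases become their own problem.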
Theodores over 9 years ago
A linguistic aside...

I always thought it was 'big iron' as in something big and made of 'iron'. The idea of 'Big Irons' makes me think of ironing shirts with some super-sized, barely liftable iron rather than something the size of my Philips iron. I imagine the steam from a 'big iron' could be quite fearsome.

Having made the link, I now have a sensible name for the server room where I work. We didn't put a sign on that door because it would be helpful to thieves. 'Ironing Room', as in what hotels have, might be more befitting, even though a server and a firewall do not make 'big iron'.

Either way, 'big irons' is now added to my lexicon to go along with other deliberate misspellings including 'nucular' and 'skelington'.
knappador over 9 years ago
One reason big machines might make a comeback is that increasing capability keeps pushing the super-linear cost growth out into the realm of >100GB in-memory or >10TB on disk. CPU hasn't kept pace unless you consider GPGPU or Phi parts. A toy model of this knee in the cost curve is sketched below.

The super-linear disk costs from the days when disks were already atrociously slow compared to the rest of the machine have largely gone away, with SSDs hitting huge capacities and tech like NVMe, solid-state RAM modules, and Intel's upcoming Optane ensuring that, more than ever, scaling horizontally can be put off far longer than used to be possible.

If you look at scale-out vs. scale-up for any applications that were disk-limited, disk performance is now ridiculous: >1GB/s, with IOPS measured in the hundreds of thousands. I'm expecting a bit of a comeback for HA over HP (high availability over high performance). More than likely, your app can be served well by a single big machine that is well within the linear scaling regime, and you need several only for durability and geo-availability.
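The toy model mentioned above, in Python. Every constant here (the knee, exponent, and node prices) is invented purely to illustrate the shape of the argument, not taken from real pricing:

```python
# Scale-up price grows super-linearly past a "knee" in capacity;
# scale-out grows roughly linearly plus a coordination overhead.
# As hardware improves, the knee moves out and scale-up wins longer.

def scale_up_cost(ram_gb, knee=100, base=10.0, exponent=1.8):
    if ram_gb <= knee:
        return base * ram_gb                       # linear regime
    return base * knee + base * (ram_gb - knee) ** exponent

def scale_out_cost(ram_gb, node_ram=64, node_price=800.0, overhead=1.3):
    nodes = -(-ram_gb // node_ram)                 # ceiling division
    return nodes * node_price * overhead           # plus ops complexity

for gb in (64, 128, 512, 2048):
    print(gb, round(scale_up_cost(gb)), round(scale_out_cost(gb)))
```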
jetsnoc over 9 years ago
A single "big iron" server is mostly simple and very well understood, whereas architecting distributed systems gets very complex very fast. It's easy to have a very large MySQL server and a MySQL slave, whereas sharding gets complicated quickly, and there is always network latency involved.
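A sketch of the complexity gap being described: with one big server every query goes to one place, while a sharded setup needs routing logic and scatter-gather for anything that spans shards. The shard names are hypothetical:

```python
import hashlib

# Hash-based shard routing. None of this machinery exists for a
# single server; cross-shard queries also need result merging.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key: str) -> str:
    digest = hashlib.md5(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

def scatter_gather(keys):
    # Group keys by shard; each group is a separate network round trip.
    plan = {}
    for k in keys:
        plan.setdefault(shard_for(k), []).append(k)
    return plan

print(scatter_gather(["user:1", "user:2", "user:42"]))
```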
n00b101 over 9 years ago
The dictionary definition cited in the article confirms that the term "Big Iron" usually refers to High Performance Computing ("supercomputers" as opposed to database servers): "Used generally of number-crunching supercomputers such as Crays."

The question is still interesting: why does Big Iron still exist in High Performance Computing? I'm not completely up to speed, but I think the reason has a lot to do with specialized network interconnects, such as three-dimensional toroidal interconnects [1]. These specialized interconnects differentiate "Big Iron" from commodity clusters. Another differentiating feature relates to memory, such as very large memory capacities and unique memory hierarchies using NVM, SSDs, etc. A third possibility is very large CPU socket counts, going beyond the standard dual-socket or quad-socket configurations. This type of technology can certainly play a role in databases, and it sheds some light on the "database appliance" trend (integrated hardware/software solutions).

[1] https://en.wikipedia.org/wiki/Torus_interconnect
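A small illustration of why toroidal interconnects matter: the wraparound links bound the worst-case hop count. This is just the standard torus distance formula in Python, with dimensions chosen arbitrarily:

```python
# Hop distance between two nodes on a 3-D torus: along each axis,
# the wraparound link means distance is min(direct, wrapped).

def torus_hops(a, b, dims=(8, 8, 8)):
    hops = 0
    for ai, bi, n in zip(a, b, dims):
        d = abs(ai - bi)
        hops += min(d, n - d)  # take the shorter way around the ring
    return hops

# Opposite corners of an 8x8x8 torus are only 3 hops apart,
# versus 21 hops on a plain 3-D mesh with no wraparound.
print(torus_hops((0, 0, 0), (7, 7, 7)))  # 3
```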
trhway over 9 years ago
Corporate developers (and their tools) can work with a DB on Big Iron: to them it is basically an appliance with a SQL interface. Distributed/horizontally scaled systems are far from being an appliance and require Google-, FB-, etc.-type employees to work with, and there aren't that many of those employees around. This is why companies that make horizontally scaled systems accessible to the typical corporation are doing so well.
derefr over 9 years ago
Unrelated question I've been wondering about for a while now: I read a lot of "Unix-ist" writings growing up, and one constant target of hate was VAX/VMS, usually compared to something like an overwrought 747 cockpit where everything is automated and shiny and nobody can really understand what's going on underneath.

Given that even Linux is now used in HPC scenarios, are the "true mainframe" OSes really still so different, either in architecture or in average sysadmin experience? Or did the microcomputer OSes effectively converge to have all the same features?
wilyk over 9 years ago
At first, there was the mainframe, and all was good.

Then Sun pushed out SPARC boxes with 3 mouse buttons (don't even LOOK at those, let alone touch those, sayeth the sysadmin).

Then we moved everything server-side to the cloud, and virtualize everything in our dev environments.

In the future, I predict that things will come around full circle: quantum computing will bring us back to the Big Iron days of "don't even look at it, don't even touch it," but given the cooling requirements of QCs these days, you won't ever see it.

"All of this has happened before, and it will all happen again."
dbhacker23 over 9 years ago
I want to put my DB in RAM. I just need to keep it under 64TB. https://www.sgi.com/products/servers/uv/uv_300_30ex.html
jopython over 9 years ago
"also Microsoft SQL Server on top of Solaris on Sparc"

MS SQL Server DOES NOT run on Solaris on SPARC. The author should have done his/her research before making that claim.
ljw1001 over 9 years ago
To press big shirts?
gaius over 9 years ago
A "cloud" *is* a mainframe to all intents and purposes.