DevOps isn't about making developers be Ops guys. It's about the fact that automation eats everything, and a significant part of 'ops' is now coding.<p>A DevOps person isn't someone who develops and who does Ops. It's someone who does only Ops, but through Development.<p>It's not about startups vs Enterprise, it's about 1 person writing programs vs 5 people doing things by hand.
The market is maturing. Take a look at a market that is similarly structured. Look at construction.<p>You have general contractors and then you have subs that work under them. A general contractor is a jack of all trades, master of none. Exactly what a full-stack developer is.<p>This isn't the end of specialization. It's the beginning of project management steered by developers who intimately understand all of the work involved, even if they aren't as competent as the specialists.<p>Having a team consist of all full-stack developers is just stupid. Having a full-stack developer as the head of a project, with specialists on the team, is a great idea.
Pure developers are a problem because they lack the information to do their job well.<p>I go back a few years, to an old, waterfall-like job. I was handed work by an analyst, who was handed a task to analyze by an engagement lead, who might at some point talk to someone using the application. The work was always handed out on time, but the product often failed, not because it was buggy, but because nobody actually had much of an idea of what we were really trying to solve.<p>So we developers got a lot of work done, but the work didn't actually solve real problems: the force was applied to the wrong vector. Then the product fails, and the blame game begins: changes are too expensive, because the developers didn't know what the real invariants were. Queries are slow, because the database architect wasn't told about the questions that the database had to answer. The business analysts just wrote documents. It was all a big catastrophe.<p>That company moved to Scrum, the terrible way: here, have this self-organizing team full of specialists that don't know anything outside of their domain. They are still failing to this day, but now they blame each other in retrospectives.<p>So I'd much rather be stuck coding less, but be aware that my code is actually solving a problem for someone, than just write castles in the sky, because everything I've been told about what my userbase needs comes from a game of telephone.
You can't really draw a hard line between administration and development; in the end you are just building a system, and the more you know about it from all angles, the better design decisions you can make and the easier it is to fix issues.<p>I diagnosed a few problems over the years that arose as apparent issues with a web application but that I gradually narrowed down to things like network issues, or kernel bugs, or system misconfiguration, or database issues etc. Modern stacks are very complicated and the interactions can get really messy; it is close to impossible for someone who doesn't understand the whole thing to find issues that aren't neatly isolated. I know perfectly well that I do not have the full qualifications of a sys-admin proper, and would not like to do a sys-admin job full time, but in those particular cases a pure sys-admin would not (and often actually could not) find those issues. As an example, I can remember many situations where the application showed different behaviour depending on which application server you hit, and typically both "pure" developers and "pure" sys-admins were having a hard time finding the issue.<p>Good sys-admins have to learn, at least, C programming, shell scripting, and network protocols and programming anyway, so it should not be a big deal to add some Rails/Django/Node to their skillset. Good developers have to know things about hardware, networks, protocols and so forth anyway. You do want to have people that are specialized in one or the other area and focus on it on a day to day basis, but you also do want to have people that can understand a particular aspect of the system top to bottom when such a need occurs, and it does happen quite often.
I think the idea is not necessarily to have developers run production systems, but they should still know what production looks like and at least have basic knowledge of how to configure all of the moving parts of the system.<p>Having developers be 'full stack' imho reduces the amount of "works on my machine". How would a developer test the software he/she is developing if he/she can't at least get close to a production environment?<p>Automated provisioning is just one of the usual 'devops' things that I can't imagine a proper software engineering process working without.<p>I would say that at least 20% of the people I graduated with can create software that works mostly ok when they hit the little green "run" icon in Eclipse. They were, however, incapable of figuring out why their jar file doesn't work in tomcat on a linux server somewhere.<p>Usually it was because they were using a local database with root credentials instead of a remote database with multiple users, they had some file stashed away somewhere in their classpath, or they had some binary installed in $PATH that made the whole thing work.<p>I think wanting to be a developer without knowing about the stack your application runs on is like being a painter who refuses to buy paint because he can't see what going to the store has to do with painting.
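The root-credentials/local-database failure mode described above usually comes down to configuration being baked into the developer's environment. A minimal sketch of the usual fix, in Python (the variable names `DB_HOST`, `DB_USER`, etc. are hypothetical, not from the parent comment): read all environment-specific settings from the environment, so dev and production run the same code with different configuration.

```python
import os

def db_config():
    """Read database settings from the environment.

    The defaults are only for local development; in production the
    deployment system sets these variables, so nothing in the code
    is tied to one developer's machine.
    """
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
        "user": os.environ.get("DB_USER", "app"),  # a scoped user, never root
        "password": os.environ.get("DB_PASSWORD", ""),
    }
```

Running the same binary against a production-like environment then only requires exporting different values, which is exactly the kind of parity the comment is asking for.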
As someone who has been doing DevOps for 20 years, since long before it was called "DevOps"...<p>First, DevOps has degenerated into a meaningless buzzword to rival "Agile", despite the good ideas and good intentions. Every day, I have recruiters looking for "DevOps". A couple of years ago, they'd never heard the word.<p>Second, DevOps is actually getting strongly biased toward Ops, often to the exclusion of Dev. In the eyes of recruiters and much of the industry, it's become synonymous with "Chef/Puppet/Ansible automation", a set of automation tools. That's stupid.<p>Third, and this is what matters to me... DevOps is (or was meant to be/should be) more about organizational structure than skills. As the author points out here, specialization is good and necessary. But specialization comes with bureaucratic compartmentalization that makes working across org boundaries very difficult. When you have to climb four or five (or more) layers up the org chart to find common management for both the dev and ops sides of a project, then the dev team has no authority over and very little way to communicate with ops, and vice versa. For most large organizations, the dev/ops separation is necessary - developers get locked out of production systems to keep them from legal exposure to customer data (HIPAA, PII, etc), and to keep them from accidentally or intentionally altering production in a way that it might break.<p>Read Gene Kim's excellent quasi-fiction book, <i>The Phoenix Project</i>. It covers a lot of the issues of DevOps as fixing communication patterns in large organizations. You'll see how little of it is about tooling or "full-stack", and how much is about clearing bureaucratic obstacles to effective communication.
I found it very interesting that Facebook apparently hired programmers for all its roles in the early days - even e.g. the receptionist. I think the point that this article misses is that a 'devops' person - that is to say, someone with both sysadmin and development skills, whichever side of the fence they originated on - <i>can do the job better</i> than someone who is "just a sysadmin" and incapable of programming. When you look at modern ops infrastructure like Puppet, you're looking at programs, written in programming languages, and it's foolish to pretend otherwise. So like it or not, you need to hire someone who can program to manage it. If you imagine you can get a cheaper non-technical ops person to handle this and save money, you're going to get inferior results.<p>I think this is going to happen to more and more careers. Already a profession like surgery or piloting a modern airliner is starting to require some of the skills we think of as programming. Software is eating the world - that doesn't make domain expertise irrelevant, but it means you need people with domain expertise <i>and</i> programming skills. That applies to non-programming roles in the software industry just as it applies to other industries.
"All too common a question now, can you imagine interviewing a chef and asking him what portion of the day he actually devotes to cooking?"<p>Yes. Chefs also do shopping, menu planning, prep, hiring, firing, marketing, and schmoozing with patrons. Source: I know a chef.
This article is pretty ignorant.<p>I don't think most developers have the capability to be sysadmins or QA. Vice versa, too, quite often. Joe developer ain't <i>that</i> special.<p>Devops is about moving the infrastructure into its own configuration-managed artifact, taking lessons from programming and computer science, and coming out with its own engineering rigor.<p>If you want your devs to operate builds/infrastructure/etc/etc, that's fine, but devops that ain't. That's called "many hats".
The author couldn't be more off-base in his understanding of how devops came to be, and his attitude is exactly the kind of cost-ineffective developer behavior that led to the partial unification of development and operations to begin with.<p>It has nothing to do with limited startup resources, and everything to do with <i>managing externalities</i>.<p>Specifically, developers have an enormous amount of control over the stability and deployability of their software: technical decisions made at almost all levels of the stack <i>directly</i> and <i>significantly</i> impact the costs of QA and Operations.<p>The people best suited to automating deployment and ensuring code quality are the people writing the code.<p>If you entirely externalize the costs of those two things, natural human laziness takes over, and developers punt bugs and deployment costs to the external QA and operations teams, ballooning the overall cost to the company.
This is an interesting rant. I had never seen DevOps as being "for" developers. My impression has always been that it is sysadmins' quest for a high degree of automation and streamlining that allows them to manage hundreds of systems without waking up in the middle of the night sweating. And when you're looking for a sophisticated tool to control something, you inevitably find yourself writing software.
Because of an abnormal learning style - I'm severely dyslexic - I have never fit into corporate environments. Even looking past the egregious spelling errors, being a slow learner isn't a winning talent in a job interview. As a result, I've fallen into the trap of full-stack (jack of all trades) developer consultant for a little over a decade now. I never got very good at anything in particular. Thus, I have battled with burnout for many years now, and am passively seeking another career outside of Internet technologies. The point of the article hits close to home.<p>The burnout aside, there is a plus to being proficient at many related tasks: having a somewhat in-depth knowledge of how all these technologies come together. The point is that not all jobs require the best, most expert techniques. As in the case of the jack-of-all-trades carpenter, as long as he knows when to call the specialist, he is still getting the jobs, as am I.
The author is missing the fact that good developers can actually <i>automate away</i> a lot of those "lower on the totem pole" roles, or at least reduce the amount of repetitive stuff down to the point where the remaining work is quite abstract and basically just more programming.<p>This isn't counter to specialization -- in a big organization, people are certainly still going to specialize. But the "DBA" equivalent people are just programmers who have fresh expertise on the storage layer, and the "QA" people are just programmers who have expertise on the automated build and test systems.<p>The dentist analogy doesn't hold in software. A dentist handling secretarial work is just an expensive waste of time, due to comparative advantage. But a programmer <i>replacing</i> secretarial work with automation often reaps big long-term dividends.
The OP makes a relatively uncontroversial point (that people will be specialized, and better, at a finite set of skills)...so I think "killing the developer" is a little dramatic.<p>However, I think as with most things that involve computational thinking and automation, this is not a zero-sum game. A developer who can apply deterministic, testable processes to server-ops may be able to reap an adequate amount of benefit for significantly lower cost than a specialized sysadmin. In addition, the developer is augmenting his/her own skills in the process. Yes, that dev was not able to focus all of their time on...whatever part of the stack they are meant to specialize in...on the other hand, the time spent studying dev ops is not necessarily a sunk cost.<p>For my own part, I've tried to stay away from sys-admin as much as possible...but when I've been pushed into it, I've gotten something out of it beyond just getting the damn server up. For example, better familiarity with UNIX tools and the importance of "text-as-an-interface"...which does apply to high-level web development...nevermind the efficiency you gain by being able to stay in the shell when most appropriate (rather than, say, figure out how to wrangle server commands in a brittle capistrano script).<p>But hell, even the end product itself, just being able to deploy a server with some confidence...is kind of empowering. For me, it opens up new ways to run scripts and jobs...It sounds dumb and maybe it's just the way my brain poorly functions, but the concepts of server-oriented architecture become so much clearer when you can spin up different machines to play with and experiment with delegation.
I don't really think a good developer can replace a good sysadmin. The reverse is true too; this is not flamebait! :P<p>I don't see "DevOps" as a way to replace some roles, but as a way to make everyone work better together. Instead of everyone living in their own bubble (which, in my - pretty limited, I admit - experience, is what always happens), everyone has to know, at least a little, what someone else does. It really helps everyone at the end of the day.
And the developer can keep coding without me screaming at him because he placed the database connection string in a configuration file that sits inside a .jar that sits inside a .war and so on.
In my own experience I don't think developers were ever pushed to become devops (as the article asserts).<p>Instead, about 40% of those who were called 'sys admins' were pushed to become devops. The 'sys admin who knew cfengine' became a 'devops person who knew ansible'. Deploys and cloud APIs just became another thing to automate.<p>The bottom 60% - the shit ones who got paid 120,000GBP to copy-paste commands they didn't understand from word documents into Solaris 8 boxes in 2010, because they couldn't actually automate anything - left the industry.
I'm a terrible system administrator. Everything I've learned about it has come from necessity because <i>startups</i>. I don't want to be a system administrator and have no desire to be good at it. So I learn the minimum I need in order to get it to do what I need to do and hope that I've done it right.<p>I might only be slightly better than someone who's new to system administration only because I've written system-level code and understand operating systems and things of that nature.<p>However a good system administrator understands the entire architecture from a holistic point of view. They know the compiler switches to use, the run-time switches to tweak, the security implications of various configurations and all of the other details it takes to keep a cluster secure.<p>I often work well with a good system administrator to debug and optimize workloads due to the overlap in our skills. I find this to be the optimal relationship.<p>Learning and practicing system administration takes away from my ability to learn and be a better programmer (and the opposite is true as well). I don't know about most people but I find I can't be good at both. And I know which one I'd rather be better at (programming).<p>I don't think the author has hit the nail on the head but I agree that effective teams can't expect one person to manage an entire application from code to managing a secure deployment.
DevOps is a rather overloaded term at the moment. I've seen it refer to any of the following:<p>- Encouraging collaboration between your Dev, Ops, and QA teams, with some cross-training so they can work together better<p>- Merging those teams under the same manager to try to improve that collaboration<p>- Making your developers responsible for all those roles, and never hiring a dedicated sysadmin or QA engineer<p>I personally think any of those is <i>fine</i>. Startups will err toward having fewer people and all of them be developers, while in a larger company it probably makes sense to specialize more and make "DevOps" mean close collaboration between those teams.<p>Of course, I've also seen "DevOps" as a job title for what would have previously been a "system administrator" or "site reliability engineer", and I have much less patience for that. :) Occasionally I see a job posting for a role that is actually dev + ops, but most often a "DevOps" posting means "we need a sysadmin, but we don't think sysadmins are cool enough to work here."
I work at a large enterprise company and for a while I was part of the DevOps team as a software engineer.<p>Some of our goals included:<p><pre><code> - Building the continuous integration/delivery pipeline
- Moving codebases from one source control system to another
- Creating programs/systems to automate tagging of builds
- Automating the deployment processes of multiple applications onto non-production servers
- Implementing and maintaining the functional testing frameworks and server grids
</code></pre>
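One of the goals above, automating the tagging of builds, can be sketched in a few lines. This is a hypothetical illustration, not the parent's actual tooling: derive a tag name from a CI build number and invoke git through a pluggable runner (so the git call itself can be swapped out or tested).

```python
import subprocess

def tag_build(build_number, prefix="build", runner=subprocess.run):
    """Tag the current git commit for a CI build (illustrative sketch).

    `runner` defaults to subprocess.run; a CI system would call this
    after a successful build so every artifact maps back to a commit.
    Raises CalledProcessError if the git command fails.
    """
    tag = f"{prefix}-{build_number}"
    runner(["git", "tag", "-a", tag, "-m", f"CI build {build_number}"],
           check=True)
    return tag
```

The point of scripting steps like this is exactly what the comment concludes: feature developers never think about tagging again, because the pipeline does it.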
The more I look at these goals, the more I realize that the developers who work on feature delivery should not worry about these anyway. So I disagree that DevOps is killing the developer. In fact, DevOps is helping the developers focus on what's important.
It's definitely more complicated than the post implies, and it most definitely is NOT only for startups. Soon after I started out - at a mature company already making plenty of money - I was a full-stack engineer. There were a number of reasons:<p>- New development happened sporadically; day-to-day work was a mixture of maintenance development and admin work<p>- Culture. They started with a small team, and never grew it. Having more people didn't fit with the way the company saw itself.<p>- Difficulty hiring specialists. Various reasons for it, but still valid.<p>At another company I worked at, there was a lot of "integration development", where your time was spent connecting various internal and external systems together, software-wise, developing tools that support systems work (i.e., tools for sysadmins), and developing other tools that are for end-users, but have a heavy systems component (management software for DNS, for example.) That meant understanding each part of the stack from both a development <i>and</i> system perspective. Another factor is interest level. A few of us were full-stack developers because we studied more than just development in our free time, and we took that with us to work. This wound up benefiting everyone. This also led to us being the go-to people (that is, the top level of internal support) for both more specialist internal developers <i>and</i> sysadmins, as we had deep knowledge of the internal systems from the bottom to the top of the stack, and the knowledge and experience to explain and troubleshoot problems for people in those other roles.<p>The author is correct in that this may be more /common/ at startups (the previous startup I worked at did in fact operate as the post describes), and is sometimes done out of necessity. It is by no means limited to those environments, however.<p>Edit: I'd also separate DevOps from full-stack engineer.
They sound like the same thing, and if you squint from far enough away, they look like the same thing. The terminology may be fluid, but I think (as some other comments state), that DevOps is more centered around "coding for systems automation", whereas "full-stack engineering" is a much more general term which can encompass a variety of different types of tasks in different environments with <i>varying</i> levels of knowledge/experience in the different parts of the stack/tools.
I'm not so sure your usage of the term "full-stack engineer" is accurate here. I consider myself full-stack, but I don't know half the stuff about Chef that our DevOps guy does and I'm ok with that. To me, a full-stack engineer means that I'm capable of coding both things that make magic happen in the browser and things that make magic happen on the server side of the application. It doesn't mean I'm a jack of all trades.<p>That said, I don't think that the increased prevalence of DevOps is bad. And I don't think it means "everyone is doing everything" either. It's a new role that is borrowing elements from both development and operations. Not one person doing both roles.
I think DevOps is very much a web-application thing (where web-application includes intranets, ... basically anything that speaks tcp). I seriously see the need there. I still remember the days when developers would build an application that worked on their system and then handed it off to Ops, hoping to never hear about it again. I interviewed developers who could not tell me which webserver or application server their company was running in production, even though capabilities and performance characteristics differ wildly. The DevOps role is trying to bridge the gap; it's the jack of all trades who knows enough of every piece of the system to debug issues that happen at those boundaries. Is this DB problem a machine issue, do we just need new hardware? Is it an application problem (n+1 queries) and where could those be? How can I structure my stack in a way to hand off tasks to the place where they can be solved efficiently? The implementation of those solutions can be handled by domain experts, but someone needs to keep all those pieces from breaking apart at the seams. In the web world, that's the DevOps.
I personally think DevOps is terribly misunderstood. I think the best way to describe DevOps is that it broke down the traditional Ops/QA/Developer roles into different roles, namely SRE, Platform Engineer, and Developer.<p>Developers take on the new responsibilities of being able to independently deploy their code, instrument and monitor stability, and own test/QA.<p>Platform Engineering is about building a robust infrastructure and the tooling needed for Developers to handle the new responsibilities. This includes packaging, monitoring, deployment, AB testing, etc.<p>Site Reliability Engineering is about dealing with fires outside of the codebase. Hardware failures, network connectivity issues, etc.<p>I don't think any of these roles becomes a "Jack of all trades, master of none" situation. It does, however, cut out some of the more typical engineering roles. While developers just took on additional responsibilities, QA engineers and traditional Ops are forced to repurpose their skill set.
"The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were."<p>DevOps is not about startups, DevOps is about avoiding the pitfalls of big companies who completely fail and leave all of their employees jobless by focusing on all of the wrong decisions and initiatives.<p>It's about outlawing cowboy coding and other bad habits that people pick up as hobbyists, and intertwining business and technical objectives reasonably.<p>Why is a full-stack developer important? Why is eroding the difference in responsibility between Dev, Ops, and QA important? Because traditionally along these boundaries have been opportunities for individuals to absolve themselves of responsibility. More than anything, DevOps is about not living in that world anymore.<p>Some people won't survive outside that world. Those who want to will read "The Phoenix Project" by Gene Kim.
As someone who is moving into more of a devops role from a pure development role, here is my learning list so far.<p>1. The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols Reference. <a href="http://www.amazon.com/The-TCP-Guide-Comprehensive-Illustrated/dp/159327047X/" rel="nofollow">http://www.amazon.com/The-TCP-Guide-Comprehensive-Illustrate...</a><p>2. Advanced Programming in the UNIX Environment <a href="http://www.amazon.com/Programming-Environment-Addison-Wesley-Professional-Computing/dp/0321637739/" rel="nofollow">http://www.amazon.com/Programming-Environment-Addison-Wesley...</a><p>3. A systems programming language - I chose golang.<p>4. GDB/makefiles<p>5. SSH, The Secure Shell: The Definitive Guide <a href="http://www.amazon.com/SSH-Secure-Shell-Definitive-Guide-ebook/dp/B006H4GA0M/" rel="nofollow">http://www.amazon.com/SSH-Secure-Shell-Definitive-Guide-eboo...</a>
From my point of view, this is due to a lack of tech education. There just are not enough people graduating/learning the technical skills necessary for medium to large software companies to employ.<p>I am a manager/developer/architect at a relatively large software company, and we have to task our developers with devops-type tasks constantly. Not because we want our developers spending time outside of coding, but because we can't hire the competency needed.<p>As you stated, good developers can generally perform these tasks, so when you have nobody lower to perform them they become a weight on the developers' shoulders.<p>No, it isn't necessarily fair, and yes, I believe in the future specialization will come back as the education system starts to realize there are many jobs in tech, not just Comp Sci degree jobs.
DevOps, at least imo, is not about technology. It is about culture, and about applying practices to speed up the various loops across organizational groups (marketing, sales, developers, ops). Of course there will always be trade-offs: if you don't have the budget to hire both an expert in the technologies that (for example) speed up configuration management and prevent snowflake servers, AND someone to develop the code for the product, then the person you do hire will have to pull double duty, or the org will have to plan for the fact that it is probably going to be doing "stuff" slower.
"If you are a developer of moderately sized software, you need a deployment system in place. Quick, what are the benefits and drawbacks of the following such systems: Puppet, Chef, Salt, Ansible, Vagrant, Docker. Now implement your deployment solution! Did you even realize which systems had no business being in that list?"<p>I'm not understanding this, you can deploy with Puppet, Chef, Salt, Ansible, Vagrant and Docker. With Vagrant you can deploy a bare image and use Chef (or one of the others) or you can just deploy a fully setup box file (like with Docker).
Are any other companies besides (well funded) startups actually hiring people as "full stack developers"? I mean, yeah, it's normal to look for candidates with <i>full stack experience</i>, but not to hire them in an actual <i>job position that requires them to do full-stack work</i>... it's a big difference.<p>(sorry if the q is off topic, I don't really understand what OP is ranting about with the devops problems, so I'm referring to the only part of the article that makes any sense to me, that about the full-stack devs...)
It's not so much that DevOps is killing the developer as it's the expectation that you can have your regular general purpose developers do your DevOps on the side.<p>I can relate to the downsides pretty well - I'm the only developer in my group and my job is mostly to develop web apps, but the IT side doesn't have much knowledge of modern tools - they live in the era 'just use Drupal and Apache' so I'm often the one who ends up having to figure out the deployment of the applications I work on (and also help with random problems from their OTB apps) and such.<p>To be honest, I don't mind when it's DB stuff because I'm pretty comfortable with it and have plenty of background with various SQL DBs, and it's not a time black hole, but when it comes to configuring servers and deployment I hate having to deal with the DevOps because there are so many pieces I never have the time to really become comfortable with them all and I feel very inefficient. Accomplishing something doesn't always take long itself, but it can require spending a day of reading wikis and documentation to accomplish something simple when you've got a lot of moving parts. And the worst part is that you have to deal with the DevOps bits so infrequently it's like you have to relearn them each time.
+1 I don't agree, but an interesting article anyways.<p>I have worked with DBAs who had PhDs and could have still done development, but they moved past that to concentrate on schema development, scaling, etc. Toss into the mix modern programs of master data development inside organizations and people who are characterized as DBAs have a very sophisticated role.<p>Also, for small projects, devops makes all the sense in the world to me. Deeply understanding how an entire system works is valuable.
Full-stack doesn't mean being a 'god of all things', except that it does in fact mean exactly that.<p>It means that no part of the stack goes un-understood: all parts of the stack should be understood and controllable by the developer.<p>Guess what - this doesn't produce 'worse developers' .. it produces better stacks. The fact is that the fracture and delineation between the <i>cultures</i> of code, rather than the actual code itself, is the true danger. Getting 'the db guy' to talk to 'the front-end guy' is a posers game. Get rid of it.<p>Instead, get your guys to move across the tree of responsibility that a full-stack approach requires. In truly professional development, there are always going to be new things to learn and new things to use to manipulate the machines - this is turned to an advantage in the full-stack approach, since it requires an adherence to a real policy: you just don't care about 'the culture of the tech', you read the docs, you write the code, you read a hell of a lot of code, and you don't really put limits on what you can and cannot understand; those limits are instead expressed in working code, at any layer of the stack. The 'cultural excuses' for why things are borked 'over there' are no longer relevant in this approach; if you're a real full-stack guy, you'll get along - source or no source, but hopefully: mostly always with the source.<p>It is a political approach, but it works - especially in industry. There are a few other principle-based disciplines in the world where an 'all-embracing' privilege exists; in this case we are lucky that computers, as grand engines of word and significance, are a form of literature. Study well, and study all .. to the end!
> ...the old "waterfall" develop-test-release cycle is seen as broken.<p>Waterfall is not just <i>seen</i> as broken, it was always broken.
Overspecialization is the source of organizational smells in a lot of medium-sized engineering companies - much of the time it's better to have generalist engineers with some specializations in what you need to do than a bunch of specialists, for a number of reasons, among them:<p>- (pure) Specialists often don't understand how their decisions affect other systems (and middle management or communication isn't always a solution)<p>- (pure) Specialists tie you to a particular technology when in reality you may need to evolve to use other technologies.<p>- If you need a bunch of different specialists to get something simple done (perhaps something you don't do all the time, so you don't have a process in place), just because they are siloed, it's a lot more complex and usually ends up badly designed (because it's harder to be iterative across teams). Generalists can get simple things done that require different skill sets to accomplish.
It seems like the OP is advocating the surgical team [0] approach to software development. This seems very consistent with DevOps. Have a group of specialists that are good at automating operations surround the key developers.<p>[0] <a href="http://c2.com/cgi/wiki?SurgicalTeam" rel="nofollow">http://c2.com/cgi/wiki?SurgicalTeam</a>
I think there is a lot of misunderstanding here. To me DevOps is not just automation (we've had that for a long time: Perl, cron, cfengine etc).<p>It's just as much about applying the same processes you would apply in development to Ops. For example, committing changes into version control and only using that, not live-patching things, much like you wouldn't live-edit a website.<p>Also being able to spin up new servers based on a config and not requiring manual config to get them going. Automation alone does not get rid of 'snowflake servers' <a href="http://martinfowler.com/bliki/SnowflakeServer.html" rel="nofollow">http://martinfowler.com/bliki/SnowflakeServer.html</a><p>Also, it can be about letting developers get the exact same environment for development/testing at no additional time cost - which in turn makes it more likely that code changes can go live without problems or delays.
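A minimal sketch of that "servers from config, not hand edits" idea: desired state lives as data in version control, and a converge step computes only the actions needed to reach it, so a rebuilt server always ends up in the same state. The names and data shapes here are purely illustrative, not a real tool's API:

```python
# Sketch of declarative convergence, the core of tools like cfengine/Puppet.
# `desired` is checked into version control; `actual` is what's on the box.

def converge(desired: dict, actual: dict) -> list:
    """Return the actions that bring `actual` in line with `desired`."""
    actions = []
    for key, value in desired.items():
        if actual.get(key) != value:
            actions.append(("set", key, value))   # missing or wrong -> fix it
    for key in actual:
        if key not in desired:
            actions.append(("remove", key))       # hand-added drift -> remove it
    return actions

desired = {"nginx": "1.24", "ntp": "enabled"}
actual = {"nginx": "1.18", "telnetd": "enabled"}  # a hand-patched snowflake

print(converge(desired, actual))
# -> [('set', 'nginx', '1.24'), ('set', 'ntp', 'enabled'), ('remove', 'telnetd')]
```

Because the function is idempotent (running it against an already-converged server yields no actions), you can apply the same config to one box or a hundred and get identical results.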
IMO, Amazon gets 'DevOps' right. It's mostly just called 'ownership' at Amazon. (source: I used to work at Amazon as a systems engineer)<p>You still have specializations - SDEs, systems engineers, DBAs, etc. However, if you write code and it ends up in production, you are responsible for the proper function of that code in production. As a friend of mine put it in terms of developers who don't want to be on-call: 'what, you don't trust the quality of your code?'<p>DevOps is simply a nicer way of saying, "own your damn code." The corollary to this is that the organization must help you get to the state where you can effectively own your code - this means collaboration (so that you build maintainable systems) and building tools that enable fairly frictionless code ownership.
I disagree with the central thesis of his argument: that being generalized is a detriment and that operations and other functions should remain siloed at your average large company. I've worked at both large companies (10k+ employees) and small companies and many things in between.<p>In general, a full-stack, DevOps-oriented approach tends to be more efficient. You have fewer monolithic, hard-to-maintain applications because you force the teams to be small and agile. People will have their specialties (operations, backend, frontend, etc.) but still remain generalized enough to have an idea of the big picture. If your frontend developer doesn't know the general idea of how Varnish and Nginx are set up in your stack, then perhaps your application is too big and complex.
The description of DevOps from the article describes what I do at a large multinational software company really well. In our project we have 5-7 developers who test each other's code and functionality, and one DevOps engineer who does build/test environments, databases, release management, change management, and impact analysis with regression testing, and fixes bugs, but rarely develops new features. It's not being done because of startup culture, which we do not have. It's done for efficiency. Every request to the DB team, even a minor one, will take at least three days to process. We do not have so much time to waste, so we have to do everything ourselves, unless it's something that requires an actual expert in the particular topic to accomplish.
I couldn't disagree more about his portrayal of DevOps. There are companies misusing any and all paradigms of development. Google "cowboy coding agile" to see what I mean.<p>When I think of DevOps, I don't think of having everyone know everything. Ops staff have to know enough code to write deployment automation scripts and dev staff need to know enough system administration to step up and help when the monitoring or deployment automation breaks.<p>It's meant to be a partnership to maintain a system rather than the old practice of throwing code over the wall. It really harms morale to have the developers all enjoying wonderful weekends while ops is on red alert because app changes they don't understand broke everything in production.
This article pretty much resonates with my experience, except that my employer (a 4-person established company) can't afford to hire a QA and sysadmin alongside my role as developer.<p>The bad side is that doing this DevOps role across multiple projects at the same time can lead to burnout, and I think I came close to that in the last few months.<p>The good side is that I've learnt a great deal about how to architect and deploy distributed web systems, how to do end-to-end testing, and how to effectively run the ops side of the business.<p>It's a mixed bag, and the burnout is the worst aspect of it, along with the cases where people are forced into situations where they are way in over their heads.
Related: I'm curious if 'full-stack' devs find themselves making more money than 'half-stack' devs. After all, you're doing more as part of your job, and you're a chimera.<p>If not, then aren't you being taken advantage of?
I love this article and couldn't agree with its central premise more. I can think of no other industry that demands an individual wear as many brain-intensive hats as that of the developer today. These jobs, which used to be distributed, are quickly becoming the baseline for how an individual applicant is judged. I for one believe that if we focus, we can become a true master of a skill AND couple that with an understanding of the "whole stack", while never being forced to maintain more than our fair share of that stack.
In the same way that being able to cook dinner doesn't make me a chef, while it's true that a developer can be a sysadmin, QA or DBA, they won't do a very good job.<p>To suggest otherwise shows a complete lack of understanding of the nuance of those roles.<p>As for suggesting that "DevOps" is killing the developer - the only thing "DevOps" is doing is polluting our common language with a term that doesn't actually mean anything concrete. It's perfect consultant speak.
As a developer, of course it's tempting to agree with the author's hierarchy. Masters of the IT world! But really it's over-simplified. As a dev with many years of experience, there's no part of the stack I can't work in and figure out what I need to do. But that doesn't replace actual operational experience and oversight. You make do in a startup or small team because you have to, so I guess ultimately I agree with the piece.
Before you start to complain, I am a fan of collaboration, but DevOps might just be the best joke ever! The truth is it means something different to every person. For years I have defined DevOps as engineers trying to get Ops out of the way and pushing forward without those pesky sysadmins. You think I am overblowing it? I have been in Silicon Valley for the boom of DevOps and I hear it all the time: "We don't need ops, we can just have a developer do it". The number of new startups who use AWS, thus allowing them to forgo a system administrator, never ceases to amaze me. My biggest problem with this is you're cutting the legs out from under yourself, but you're assuring me job security, so maybe I should keep my mouth shut.
I have been an operations engineer for over ten years now, and honestly developers and ops engineers have different ways of functioning. To me a good software engineer has long-term focus, can get deep into a project and crunch on the same code for extended durations. Give a good coder a project that will take weeks or even months and they will put their head down and solve your problem. As a generalization, these people do not handle interrupt-driven work well, and they often do not handle high-pressure situations well either.
Operations people, on the other hand, do the majority of their work under massive interruption and constant pressure. Tell an operations engineer the site is down and they will not focus on what the origin of the problem is; they will focus on getting the product back online and come back later to fully understand why. This does not mean they do not troubleshoot, but they are trying to identify the immediate cause, not the who or the root. One might argue this is short-sighted, but when you're stuck waiting for someone to figure out why the web servers went down, you're killing your customer experience. I would argue: restart the web pool, get the product back online, and then start to look at root cause once you have identified the customer-impacting problem and completed the shortest-path solution.
When you start off by having your engineers run operations, you never allow new ops people to start from the ground up and develop their skills, learning the pain points as the system grows - thus ensuring that when you grow to the point that you need an operations engineer, there is a shortage of trained people available. One might argue that some of the developers who started the company by running operations will become your operations engineers and will cover this, but to me that's like using vice grips to remove a bolt.
This is a really badly written article but I know the point he's trying to make.<p>DevOps is stupid because it fractures expertise and makes it more difficult to get work done. By splitting up roles you get more domain-specific knowledge, have more time to work on a single problem, and provide support for your co-workers who also have different specific roles. I would much prefer to work with specialists than generalists.
While I get the need for page views, I really wish problematic aspects of any tech movement could be discussed in a way that actually improves things rather than tears them down.<p>You hate <i>xyz</i>? OK, but apparently <i>xyz</i> has enough merit to get the attention of quite a few people, so let's identify the problem areas and make <i>xyz</i> better rather than resorting to hyperbole and melodrama.
The role of DevOps is to help developers work more efficiently, not give developers more work to do. An example of this could be a TFS administrator who works on TFS build template changes and configuration to make the build and deploy process as automated as possible. Nothing to do with being a startup, or trying to get more work done with fewer people.
The problem with DevOps is that it's a meaningless term. Look at all the comments here, all starting off with what "DevOps is," or "Devops isn't." Instances of people arguing past each other based on different interpretations.<p>You can't have a fruitful discussion when everybody uses it differently.
Every place I've seen DevOps, seems that developers bear the brunt of the work - learning the infrastructure and understanding deployments and such. I've never seen Ops people learning the codebase or even the software architecture / data structures.<p>Maybe that wasn't true "DevOps"?
<i>>Large companies love this, as it means they can hire far fewer people to do the same amount of work.</i><p>But they cost much more as well. Following that logic, it would be in the interest of hospitals to make "full-stack-doctors" clean toilets.
"As a sysadmin, I would like developers to pay any damn attention to what happens in live before deploy without me having to cattle-prod them into doing so after deploy, so I don't have due cause to set them on fire."
Interesting. I always thought of "Full-stack" developers from a web perspective being capable of coding from the client to the server. Never thought of them being devs that do ops also.
I'm glad to see that most of the replies here object that the writer's view or definition of DevOps is not the most accepted/popular one.
The author needs to read or reread The Mythical Man Month. Even in a large organization there are important benefits to having fewer people on a team. Even if this means that someone is sometimes doing work that they are overqualified for.<p>He makes some good points but he misses the value of needing fewer people to accomplish the same thing.
The guy is missing the point by 10000 miles. DevOps is about getting together with devs and focusing on best practices from day one. Keep in mind that you need to deploy your software in a timely, reliable manner, and that it is going to run on a network of computers where part of your system might be down or showing elevated latency. I could not believe how non-trivial these things were until I saw with my own eyes that most of the software out there still has the following assumptions: a zero-latency network with unlimited bandwidth, 100% server uptime, and memory and CPU as something you can keep adding to computers. My experience is that when people talk about DevOps, what they really mean is site reliability or systems engineers: people who understand networks and operating systems in depth and can read and write code, yet whose primary focus is not delivering customer-facing services but developing tools that improve deployments, automate error-prone processes, and tune operating systems for better performance. In my humble opinion, developers should be aware of the architecture of the system they are writing software for, but it seems we need another breed of engineers who are more focused on that as of today. Let's call them DevOps... :)
It seems to me that the OP's real objection isn't to "DevOps" but with the reality of the software industry. He's upset that developers often are asked to do "lower" work. I find that a bit simplistic on his part. If anything, DevOps at its best is about elevating the ops work (by recognizing automation possibilities).<p>The issue is that employers are horribly inconsistent. They demand specialism in hiring, but refuse to respect specialties once they've pulled people in. Thus, you end up having to interview like a real computer scientist, only to find that most of the work is mind-numbing for a serious programmer, but that there's no one around at-level for it because "we only hire A players".<p>DevOps didn't do this. The problem is the industry, not one concept.
tl;dr: there's a problem here in software, and I don't know what it is; let's fix it by acknowledging its existence and then going back to what we normally do.