The reason that planning software development is bullshit is that you simply cannot know all the little details, because in software development you're always doing something you've never done before (if you had done it before, you could just copy-paste your previous work).<p>To use the 'going to the supermarket for milk' example: I could make a fairly accurate estimate for that because I've gone to the supermarket hundreds of times. I know all the different things that can go wrong and account for them, because I've encountered them before. The elderly person who wants to pay cash and has a bag full of coins. The guy who finds a 10 cent discrepancy on his €92,30 receipt and argues for 5 minutes with the cashier (while the queue behind him keeps growing). Etc., etc.<p>Now imagine that today is your first time going to a supermarket. In fact, before today you had never heard of the concept of supermarkets, or of milk for that matter. How good will your estimate be?<p>The only time you can make a decent estimate is after you've finished. Or to put it differently: making an accurate estimate is possible, if you accept that making the estimate will itself take a long time, and I can't tell you how long.<p>Development time estimation, and every methodology that attempts it (I'm looking at you, Scrum), is little more than a desperate attempt by managers to feel in control and relevant.
In my experience talking to hundreds of software engineering managers, running a software project management business, and being involved on software dev teams for 20+ years, there are two reasons why teams don’t ship, or ship poor software.<p>1. Poor engineering planning. The way to do this well is to break down a feature until the tasks are about half a day each. Identify any obvious risks or ambiguities in the technical approach, communicate them to Product, and figure out how to de-risk them. Communicate costs to Product so PMs can make cost/benefit trade-offs (e.g. “if we built it the way you specced it, it’ll take two weeks; with these trade-offs, it’ll take two days”). As a Product Manager I might not want the feature if it costs two weeks, but I do if it costs less than 5 days. Without costing features in dev-days I can’t make that trade-off.<p>2. Failure to make trade-offs that drive towards shipping software. The way you ship is by declaring it’s done, even when it’s not really done. When code is being written, unexpected issues always arise. It’s Product’s job to make hard trade-offs that drive towards shipping. E.g. “that’s an edge-case bug, let’s punt it to v2” or “let’s cut that nice-to-have feature, squash the showstopper bugs, and ship”.<p>People bring up impossible deadlines set by management. In my experience most deadlines are movable, and a strong PM/Eng team can convince management to push a deadline back. And if a deadline can’t be moved, it’s all the more important to know how much features will cost and make hard trade-offs to get the product out the door.
Development time estimates are too short because there's no penalty for management underestimation. If programmers were paid like the movie industry, estimation would be more reliable. Time and a half after 8 hours, or after 5 days. Pay for 4 hours if you're needed briefly during off hours. Double time on Sundays. That's quite different from just telling people it's "crunch" time.<p>Movie scheduling and estimation is organized enough that you can buy a completion bond. If the job isn't completed within an error margin over cost and schedule, the completion bond company pays. If there's a cost overrun, the completion bond company has the authority to send in their own people to monitor things and, if necessary, to <i>fire the director</i> or anybody else, and take over the production.[1][2]<p>Completion bond companies do project cost and schedule estimation independently of the production. They have the data for this, because they have the full accounting data for hundreds or thousands of films. They watch project progress carefully: "Our monitoring process requires the production to email us daily shooting progress reports and a weekly cost report in order to properly evaluate the progress of the film. FFI also makes periodic visits to the shooting area."<p>This prevents directors and producers from low-balling their estimates. Underestimation leads to unemployment.<p>A completion bond typically costs about 4% of the film budget.<p>[1] <a href="https://www.eqgroup.com/completion_bond/" rel="nofollow">https://www.eqgroup.com/completion_bond/</a>
[2] <a href="http://www.filmfinances.com/services/evaluation" rel="nofollow">http://www.filmfinances.com/services/evaluation</a>
Do you really want to know why <i>some</i> development teams struggle to deliver on time, on budget, or at all? Because the majority of software projects out there are utter useless crap, commissioned by people who have no idea what the heck they are doing and who focus on the smallest, stupidest details before even getting a decent number of users, or even <i>wondering</i> whether users would like those changes before implementing them. People who, once the money runs out, will let the project crumble, frustrating developers who have to reimplement the same stupid piece of logic 20 times because "that button looks too big" or "this would be a really cool animation to have" while everything else goes to shit.<p>We get into software development because we expect it to be a creative, challenging and fun profession that creates value, and yet most of us answer to clients or employers who expect us to spend 80-90% of our time working on boring, senseless stuff. You want us to do that? Great! But don't expect high quality and on-time delivery.<p>The real reason behind delays is that we just don't give a crap about your "social network for cows" and we can't wait to save enough money to get the fuck out and either start a business, work for a decent company or start investing.<p>Apologies, but it feels good to rant every once in a while.
So the author of this has a vested interest in their "solution" being the correct one, but I fundamentally disagree with it.<p>I agree with their hypothesis that we are bad at planning/estimating, but the solution is not spending more time on it, but rather less.<p>First, I really like this humorous look at the problem: <a href="https://www.quora.com/Why-are-software-development-task-estimations-regularly-off-by-a-factor-of-2-3/answer/Michael-Wolfe" rel="nofollow">https://www.quora.com/Why-are-software-development-task-esti...</a><p>You cannot estimate what you don't know, and you don't know what you don't know; no amount of upfront planning will surface those unknowns. In their own example, you can't know upfront that a road is closed, or that there's been an accident, or that only one cashier is working, until you get to that point.<p>In my opinion the most reliable solution is to break the work down into small pieces that each deliver value (ideally less than a week's work). Prioritise, then deliver the first piece. Have regular reviews/checkpoints with stakeholders to decide: was any value delivered? Are there new learnings we need to apply to the rest of the project, or new 'pieces' we've discovered that now need doing? What is the next piece we need to do, and is it worth continuing?
In my experience, deadlines are set by higher-ups long removed from (or never having had) the technical chops to be determining deadlines in the first place. VPs and Directors are often the ones dictating the direction, which is great, but they're then also introducing deadlines, with helpful input from Directors who ALSO haven't touched code in many years.<p>Generally that results in one of two things:<p>* Product delivered on time with massive technical debt.<p>* Product delivered late with massive technical debt.<p>Frankly I don't know if adding front-line engineers to the deadline decisions would make the issue better or worse, but fundamentally, having non-technical or formerly-technical people defining deadlines definitely doesn't work.
Having been involved professionally in software development for about 14 years now, I have to say I respectfully disagree. More planning does not result in better predictions. In fact, it often results in worse ones. That is just my empirical observation.<p>My best guess as to why is that there are managers involved who attempt to negotiate the planned delivery time down. They do this, in part, because it's hard to get devs to work late or on the weekend when the project is on schedule, but easier when there is an obvious risk of falling behind. So, from their point of view, the best way to get the product delivered early is to get the schedule made too optimistic.<p>I'm not saying they SHOULD do this, or even that they are consciously thinking this way, but it's what the situation incentivizes them to do, and it's what normally happens. The gut-level immediate answer is based on past experience; the System 2 answer produced by the long, drawn-out meeting is based on management bargaining the developers down to a shorter timeline.
I think software gets a bad rap when it comes to budget and time overruns. We see a lot more software projects go over budget, over time, or fail mainly because they're more common than any other type of complex project.<p>You don't have to look far for a failed project of another type. While not necessarily a "project", 50% of all businesses completely fail within 5 years [1]. How many Kickstarters have you seen deliver on time? Even if you limit the criteria to experienced people working in their own field? One I've been following [2] is just a book, supposedly already complete before it was funded; it was supposed to start in June 2017 and ship in Aug 2017. It's now June 2018 and it still hasn't shipped. That's 600% over time and counting.<p>I'm not claiming that's scientific proof that software estimation isn't worse than anything else. But most projects' estimation is done behind closed doors, so we'll never get a good feel for how bad/good things really are.<p>[1] <a href="https://fitsmallbusiness.com/small-business-statistics/" rel="nofollow">https://fitsmallbusiness.com/small-business-statistics/</a>
[2] <a href="https://www.kickstarter.com/projects/pighixxx/abc-basic-connections-the-essential-book-for-maker/description" rel="nofollow">https://www.kickstarter.com/projects/pighixxx/abc-basic-conn...</a>
The thought occurs to me that no matter how much planning you do or how good at estimating you are, the development is going to take however long it's going to take.<p>You can plan your trip to the store to buy milk and figure out exactly how long it will take, but the only thing that actually matters is that you arrive back home, with milk.<p>If that milk is absolutely necessary, whether it takes 30 minutes or ten minutes to get it is really a secondary concern. If you spend five minutes getting a better estimate you've delayed the milk by five minutes regardless of how long it takes or how right you were about the timing.<p>I think we spend too much time thinking about time estimation when the planning we should be doing is figuring out what is so important that the time it takes to build is worth it even if the time estimates are off.
Higher up manager here. I fully accepted the #noestimates movement and it is a complete blessing for all the teams and organizations I've implemented it in. Roast me.
Another reason I'd like to add regarding why so many software projects fail is a really basic one, but it is ultimately the reason why most software projects fail.<p>The budget isn't there.<p>For many, building software is a race to the bottom. I've worked at countless places, from "Wagile" shops that run spiral methodologies mixed with agile, to fully agile agencies that deliver well but crumble the second a client gets pissy about something taking longer or costing more than it should.<p>In my view, the most basic problem in software is that we're committing to too much for too little, which is why I see development as similar to working in a skilled trade. If you pay good money for a renderer, you'll get the outside of your house rendered nicely, with good advice on what to use and what looks good. They'll also tell you how long it'll take, and if you say you want it sooner they'll tell you it'll either cost a lot more to get more manpower, or they'll decline the job. If you are cheap about it, you'll probably get someone who takes longer than expected, makes a mess of the job, and leaves you with something you're not entirely happy with.<p>A solid methodology will probably help with delivering software on time and on budget, but if you are unrealistic with either metric then it doesn't matter what methodology you use. You'll take liberties with it, decide that it's bullshit, and continue to cowboy your way towards a duct-taped mess of a solution.<p>It's something few want to talk about, probably because there isn't really a solution to it outside of:<p>* Paying a premium for a development team with a track record of recent success<p>* Having people who know the full software lifecycle be involved in all parts of the process<p>* Actually embracing the fact that requirements change, to the point where budgets and timescales are flexible<p>* Not joining the race to the bottom.
There are ways to deliver on time & budget, but most developers are not going to like them, and neither are their "clients".<p>Use the frameworks exactly as they are intended; don't try to invent new solutions that aren't native to the framework.<p>Anything that takes you outside the beaten path in development is a potentially infinite black box.<p>In other words, a lot of software engineering can't be put in timeboxes because it's actually R&D more than it's development, where each little step forward can add a potentially infinite number of new tasks to be done or problems to be solved. Add to that the constant need to update, upgrade, improve and re-design, and you know it's just not doable.<p>So the primary problem IMO is that we think about a lot of development as if it's something that can be put in boxes. Some of it can, of course, and the better and more solid the team becomes, the better they are at it. But the teams who struggle are mostly struggling because the expectations for what they are actually doing (inventive problem-solving) don't match what they are being paid to do (build).
Site is down, cached version: <a href="http://webcache.googleusercontent.com/search?q=cache:https://www.7pace.com/blog/software-development-planning-fallacy&num=1&strip=1&vwsrc=0" rel="nofollow">http://webcache.googleusercontent.com/search?q=cache:https:/...</a>
Having been through multiple month-long planning "phases" that were eventually shown to be wildly inaccurate, I disagree.<p>This is exactly what we used to do in the waterfall days. It didn't work.<p>The only approach that's worked for me is to build something really small but valuable. So small that it's hard to be disastrously wrong. Once you've released that value, build upon it.<p>Stakeholders tend to be much happier, as they at least have something they can use really early on.
Software is design, not construction. This is why it's hard to estimate.<p>Ask an architect to design a <i>new</i> skyscraper in a fixed three-month timeframe: good luck. It won't be what you want, or it'll have severe problems discovered during construction.<p>Software developers are creative workers; we have to accept this. Unless you're doing the nth iteration of a basic CRUD/RESTful web app, or a trivial "display data from a database" app (note: the DB design may take more time, and some of the controls on interaction will take more time, but the core of it will be the same), you can't reasonably estimate your time.<p>Once you know Ruby on Rails, making a prototype of a webapp is a rote task. You can knock it out in a known (from experience) time. Then you try to improve it, add new features, customize the backend (the first time you've written a DB connector), and things start going off schedule.<p>The only other way to get reasonable estimates of the <i>development</i> is to spend a ton of time upfront (unestimable) designing the system before you touch the code. OK, now the coding tasks are well understood, but you also just spent 2 years designing it. And, like the architect, if you rush this design part, problems will crop up during coding that will blow your schedule.
Upon reading the first couple of paragraphs I decided to test it on myself. I had to buy X, which is available in convenience stores (konbini, Japan). I estimated it'd take 5-7 minutes, and I was very surprised when it actually took 4 minutes! To be fair, this has a lot more to do with Japan than with myself. For the same test in my home country (Spain) I would have had to guess 15±5 min during the daytime. Sidenote: I was very surprised by the concept of "driving for milk", which I'm assuming is a very American thing.<p>I'm fairly realistic about software estimation, which only came after a LOT of retrospection. Things normally take what I estimate, both personally and professionally. The hardest factor I've learned to include is the level of detail. For instance, with my personal website [1] I gave myself a full Saturday because I knew I wanted a high level of detail and I had the overall design in mind. It took the Saturday +1h for a couple of improvements/bugfixes (under 10% error). With my current job I'm also under 10% error.<p>In the past I was bitten a LOT by my unrealistic estimations, so the only way to move forward was to learn from them, and so I did. So now, from the article, I know that <i>my</i> "quick thinking" is around 50% of the project. I force myself to think a bit more and the details trickle down.<p>Another thing I've learned is that projects tend to fill as much time as they're given (Parkinson's law [2]). So if you are told a deadline, halve it! Make that half your internal deadline, and the project will be just on time.<p>Finally, complexities are exponential, so learn to say no to unnecessary cruft. "A small change" might seem like a 1% change to business, and to you initially, but it will more likely than not grow into a 10-20% change in the end. 
FFS that is why <i>the duck</i> was added in the first place, to avoid wasting time [3].<p>[1] <a href="https://francisco.io" rel="nofollow">https://francisco.io</a><p>[2] <a href="https://en.wikipedia.org/wiki/Parkinson%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Parkinson%27s_law</a><p>[3] <a href="https://rachelbythebay.com/w/2013/06/05/duck/" rel="nofollow">https://rachelbythebay.com/w/2013/06/05/duck/</a>
This is literally what agile is for. Step 1 (and this is the hardest part) is to negotiate the need for either a flexible scope or a flexible deadline. Accept that your estimate for a fixed scope will just never be accurate, and make room to adjust. Putting "buffer" into your estimate is also not the right approach, because it's planning for failure. Flexible scope means you have work to fill what would have gone into your buffer: work you could launch without if you need to, but that would be very nice to have.<p>Create your high-level backlog and do MoSCoW prioritization. Figure out your "musts", "shoulds", "coulds" and "won'ts". Now apply some estimates to your features and add 10% for unforeseen growth. Estimate velocity based on team size, and now you've got a date when you could conceivably hit your musts, shoulds and coulds. Set your "deadline", if you must, somewhere deep in the coulds. If your musts go over, you are still able to launch an MVP. If things go well, you can start delivering non-musts.<p>Adjust your plan every sprint based on actual velocity.
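A back-of-the-envelope sketch of that arithmetic (the backlog, estimates, velocity and start date below are all invented for illustration, not taken from the comment above):

```python
from datetime import date, timedelta

# Hypothetical backlog: (feature, MoSCoW priority, estimate in dev-days)
backlog = [
    ("user signup",         "must",   4),
    ("password reset",      "must",   2),
    ("CSV export",          "should", 3),
    ("audit log",           "should", 5),
    ("dark mode",           "could",  2),
]

GROWTH = 1.10    # +10% for unforeseen growth
VELOCITY = 1.5   # dev-days of work the team completes per calendar day
START = date(2018, 6, 11)

def finish_date(priorities):
    """Calendar date by which every feature at the given priorities lands."""
    days = sum(est for _, pri, est in backlog if pri in priorities) * GROWTH
    return START + timedelta(days=round(days / VELOCITY))

print("musts done:  ", finish_date({"must"}))
print("shoulds done:", finish_date({"must", "should"}))
print("coulds done: ", finish_date({"must", "should", "could"}))
```

Each date is what the comment calls a conceivable hit point for the musts, shoulds and coulds; re-running this with the velocity observed after each sprint is the "adjust your plan" step.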
I hate it when PMs ask me for an estimate of effort on a task I have never done before and I get this question all the time. I get asked to get a new process through a deployment system I've never worked with before and that's fine. How long will it take? That depends on how complex the deployment system is and I haven't worked with it yet.
There's a lot of snake oil for sale if you are looking to spend money on solving this problem; this is probably more of that.<p>The reality is that mostly we've gotten better at avoiding things that clearly don't work or are historically, obviously misguided or inappropriately expensive (waterfall, CMM level 5, etc.), and instead emphasize things that work slightly better: managing risk by doing iterations, not attempting to plan unplannable eventualities too far ahead of time, etc.<p>Some people refer to this as Agile. Other people, as common sense. Either way, not wasting time on things that clearly don't add value tends to free people up to do something productive (duh). There's a pattern with agile of mostly non-technical people higher up the management chain getting overly excited about things like estimates, velocity, burn charts, etc. I usually call stuff like that the illusion of progress, and waterfall in disguise. Scrum particularly seems to have devolved into decorating offices with post-its and employing busy-looking people to move those around and manually track them in convoluted tools like Jira.<p>But undeniably, we've gotten better over the past decades at building stuff with huge groups of people. Any idiot can probably cobble together some lines of code that do something vaguely useful. But committing to building stuff with hundreds or thousands of people is a different game. It requires lots of money and focus, and there are quite a few companies that are doing this successfully.<p>A bigger pattern in our industry is that people seem to have shifted to calendar-driven roadmaps for the most important bits of software, where they ship whatever is ready on fixed dates instead of committing to a long list of stuff that ships whenever it is ready. E.g. Apple ships OS versions once a year; Mozilla, Linux, Chrome, etc. ship every few months, typically with massive amounts of code changes.
I really wish people would put more effort into making sure people don't waste time, instead of endlessly trying to make development more predictable. In my company there are a ton of inefficient processes and other things limiting productivity (noise, teams spread out over an entire building, developers having to do work that would better be done by qualified tech writers, lack of adequate onboarding, lack of decision making, constant changes). Instead of addressing these, management keeps on doing status meetings and planning.<p>I think they would be much better off if they made sure their people have optimal productivity and then saw how quickly things can be done. I guess that is exactly what scrum originally tried to address...
When I first started working as a web developer, my boss would ask me how long it would take to finish a project. I would give him a time estimate, and he would always double it. As it turns out, he was always right. But because of it, I learned to provide better time estimates. Over time I became more accurate and started to provide more realistic time frames. Depending on where you work or who you work with, there's a lot more that goes into your day-to-day tasks than just coding.
The fact that there are so many reasons for failure itself tells you why failures are so frequent. I strongly believe that more than just the development teams should be held responsible for failures. Projects rarely get delayed or fail because of developers alone; the more responsible parties are management and the company culture. Even the strongest of developers learn over time that sticking to a more realistic schedule will not earn them praise. Unfortunately, in most places the path to promotion is to keep the bosses happy. Most people are not motivated by the end results; it's more about looking good on a day-to-day basis.
Keeping the failures aside, what has worked for me in the past is to add a little padding (10%-20%) to all the tasks, which no one would question, and then we have enough padding to cover any task on which the team really spent around 1,000% more time than estimated. Again, it really depends on how well the product people understand the effort involved in development. It's hard to make someone believe that a one-line change took 3-4 days while another 1,000 lines were added in half a day, if they have not been there themselves.
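One way to see why small per-task padding can absorb a single huge blow-out (a sketch with invented task names and hours, not the commenter's actual figures):

```python
# Hypothetical estimates in hours. Padding each task by 15% builds an
# aggregate buffer nobody has to defend line by line.
tasks = {
    "login form": 8,
    "API client": 16,
    "report page": 12,
    "search": 14,
    "notifications": 18,
    "one-line config fix": 1,
}

PAD = 1.15
padded_total = sum(h * PAD for h in tasks.values())

# Suppose the "one-line" change takes ~1,000% more time than estimated
# (11 hours instead of 1) while everything else lands on estimate.
actuals = dict(tasks, **{"one-line config fix": 11})
actual_total = sum(actuals.values())

print(f"padded plan: {padded_total:.2f}h, actual: {actual_total}h")
print("buffer absorbed the blow-out:", actual_total <= padded_total)
```

The point is that the buffer only works in aggregate: 15% on the one-hour task alone would never have covered it, but 15% spread across the whole plan does.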
I think neither solution is the correct one.<p>I find that with the second solution, "thorough planning", you get a bunch of people fabricating estimates of estimates, with or without padding, or trying to estimate a bunch of unforeseen events that are too far ahead to get an accurate handle on.
Sure, the second is likely to be much more accurate, just because the second is likely to be much, much longer. But I have never given one of these estimates without it being followed by immense disappointment from the business and a desire to strong-arm me into something shorter. This demonstrates a fundamental issue with the topic: it's not about the estimate at all.<p>I think the problem with estimates is their finality and assumption of correctness. I figure once you're in the developer-years category, you might as well iterate the estimate a bit to get a better sense of it.
Error bars should be translated into risk for the business decision.<p>Too often I see people claim to be making "rational, fact-based" decisions on estimates (beyond a year out) that are complete codshit. This is not rational decision making. These decisions should be about risk management that assumes failure, as opposed to thinking you can slot year-plus development estimates together.<p>I think very often the desire to lock down development estimates into "rational fact" reflects business decisions about risk masquerading as technical developer decisions about fact. I have yet to see a situation where we deliver an estimate that blows a business decision out of the water and the business just backs down. It just learns to ask a different question and gets the answer it wants out of that.
I thought this was a decent article, with decent advice if you're in a situation where you have to make estimates under conditions of uncertainty.<p>An example I've tried to use to explain software deadlines is the college term paper vs the mathematical proof. I can commit to writing a college term paper by a set date. I may do a good job or a bad job, but I know I can produce 20 pages on a topic with citations and references. It's really just a matter of will and follow-through.<p>Many people experience deadlines this way, which is why they get a bit outraged when software developers fail to hit deadlines or warn properly that they won't. They don't understand that software can be more like a mathematical proof. You can tackle it, try things, but you only might crack it. You might be no closer than when you started. You might be moments away from cracking it and not know it.<p>My career advice to people is to seek out situations where software development deadlines have more in common with term papers than mathematical proofs. These jobs do exist, though they are elusive. It's just another desirable aspect of a job, like better pay, nicer working conditions, telecommuting. There are jobs that define the goals of a software project more vaguely, to the point where you can deliver something great, or something merely OK, but there's really no chance you can't deliver anything at all.<p>Some jobs go extinct because they are just so unpleasant that the people with the talent to work them simply find other options for employment. I personally find the stress of working under strict deadlines with very little certainty unpleasant enough that I'll accept lower pay or other tradeoffs to avoid them.
I can name exactly two reasons:<p>First, they commit to unclear expectations and requirements and no one has the backbone to call it out. This is a leadership failure or a top-down problem, arising when a division is led by weak management with poor communication skills. The fix for an organization is to hold the top accountable first for the results, which rarely happens.<p>The second is a bottom-up problem: a team has convinced themselves the only way they can solve a problem is to use this one framework that they haven't used yet. In music, there's a mental block where students look at the student violin or guitar and are convinced the reason they don't 'sound good' is that they're using a cheap instrument. This is categorically false: they need to put years of practice into the instrument. In much the same way, developers are obsessed with the framework they haven't used yet. They should be obsessed with discerning the business requirements. If we went to the moon on slide rules, the stack you already have is likely more than sufficient to implement whatever challenge is in front of you.
As a long-time engineer and occasional PM, I steer teams to think about the worst case, and then triple that time estimate.<p>I know this sounds like setting a team up for failure, but I’ve seen it work again and again. It sets clear expectations for quality and delivery, upwards and downwards, which everyone can agree on.<p>Once this is done, the easier part is keeping everyone focused and using all the leftover time well to raise quality.
I kind of like the XP approach to this.<p>First, estimates longer than two (or was it three?) weeks need to be split into smaller pieces. (Because it seems that when the estimates exceed two weeks, the accuracy of the estimates goes down. We're just not good at estimating things longer than that.)<p>Second, if you don't know enough to make the estimate - if the task is something that you don't know how to do - the first task is to find out. In XP, this is called a "spike" - a task where the purpose is to nail something down, rather than to produce a usable artifact. Often you don't know how to do something, but you can say that after two weeks of research, you'd have a better idea. So take the two weeks of research, and then you can give a decent estimate. (Hopefully - there are some tasks that you'll know you're done when you're done.)
As someone who does this every day across all industries and project types, I agree 100%, but proper/effective planning is not a panacea for project success.<p>Identifying and managing Risk is equally important. As is Interface management (the people-to-people / team kind) and correct Quality Assurance.<p>On top of all of that you need good Project Governance to handle change properly, with clear limits and bounds for changing durations of activities, budgets, scope and acceptable quality; defining responsibility and accountability; delegating authority; and setting regular reporting requirements and cadence.<p>Managing (non-trivial) projects well is difficult; it takes hard work and careful thought. That is why we struggle. We revert to System 1 for all aspects of project management, not just planning.
If you __really__ don't get to set the deadline, is it still reasonable to be held accountable for the miss? Even if you give a worst case and a best case, the only one anyone else focuses on is the best case.<p>Personally, I prefer the 80/20 rule. That is, most of the work will go fairly quickly, or at least uneventfully. It's the last bit that's always the killer. The devil is in the details, as they say.<p>When asked for an estimate we quickly identify the 80. The key is to pause and not write off the remaining 20.<p>p.s. Early in my career I read somewhere (I wish I could remember where) something along the lines of:<p>Almost everything takes twice as long and costs twice as much as your original estimate.<p>If I had $20 for every time I found that to be true, I wouldn't have time for HN. I'd be too busy counting my money.
Utterly awful article.<p>If I know that I've gone and gotten the milk a dozen times or more recently, I have a pretty good idea of the minimum and maximum amount of time it'll take me to go get that milk. It's 4km, about a seven minute drive, and the quickest I've done it is 20 minutes round trip, and the longest about 40 minutes (probably browsed the store a bit for other stuff), so I can give a fairly confident range quite quickly without having to over-plan.<p>It's after you've gotten the milk that the client remembers they're lactose intolerant and can you please go get some almond milk. That's when you go over time and over budget. Although in hindsight, perhaps I should have asked what sort of milk they prefer.
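The range-from-experience approach this comment describes can be sketched in a few lines of Python. The trip times below are made-up illustrative data (only the 20- and 40-minute endpoints come from the comment), and the function name is my own:

```python
import statistics

def quoted_range(past_minutes):
    """Quote a confident range from repeated experience: the fastest and
    slowest times actually observed, plus the typical (median) case."""
    return min(past_minutes), statistics.median(past_minutes), max(past_minutes)

# A dozen hypothetical past milk runs, in minutes (invented sample data).
trips = [20, 22, 25, 23, 28, 31, 26, 24, 40, 27, 22, 30]
low, typical, high = quoted_range(trips)
print(f"Quote: {low}-{high} minutes, typically around {typical}")
```

The point being: with a dozen samples of a task you've done before, the range falls out of the data; with zero samples (first supermarket trip ever), there is nothing to compute from.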
Complexity is not as democratizable as the industry likes to believe.<p>- an org-chart team of 100 will perform worse than a tightly knit team of 10<p>- a tightly knit team of 10 will perform worse if tangible results are expected on regular intervals of the choosing of managers (e.g., weekly, monthly, take your pick) as opposed to treating the software project as a computer science research project spanning a year or more.<p>- a tightly knit team of 10 working on a software research project spanning a year or more will perform worse if expected to succeed in their first attempt as opposed to allowing them to fail one or more times and change directions, maybe even starting from scratch every time.
> This is where System 2 comes in—if we performed a more thorough analysis, these factors would have been considered in our answer. Then it would be clear that it’s much more likely to take 20 or 30 minutes to run to the store instead of 10.<p>I have encountered an article (can't find the link now, sadly) that claimed that when developers gave estimations, breaking down tasks into sub-tasks actually had a <i>reverse</i> correlation to their accuracy. In other words, their first gut reactions were actually better than estimations given after going in-depth through all the details and sub-tasks.
One statistic that has stuck with me is that 80% of software projects fail. The default is that your software will not succeed or do well.<p>One small reason for that is poor time estimation, but good time estimation won't make your software succeed. I think good planning makes for a good working environment, but it doesn't mean the project as a whole will succeed. It might just mean the developers will be happier to stay with you and pivot to the next idea.<p>I would like to look at the percentage of successful projects and see what proportion of those had good estimation practices.
The simple fact is that individuals or companies following System 1 have a much greater chance of survival in the competition. Consulting individuals or companies will simply find it hard to survive without System 1 - unless they have already built up a lot of capacity and capability. Unfortunately, many get comfortable using System 1.<p>System 2 works for Products (including SaaS), and chances are it mostly works for Products that were already slightly successful in the first place. This is also where you find consulting companies that may charge obnoxious rates.
I thought it was because people seem to like changing their product plans midway through development. Seen an awful lot of scope and feature creep in project management, usually either because of a 'client' who keeps wanting more or design by committee.<p>Though I guess that often ties into project management and planning failures too. Seen a few projects where no one seemingly asked what the client/customer/business needed or how they actually worked, which then meant a ton of refactoring further down the line.
I think there are a few main reasons why this happens. I understand this will not appeal to many developers, and understandably so. Learning new frameworks and having the freedom to choose your technology is definitely nice when you are a developer, but you also need to recognize that if you are changing your stack every year, you can be an experienced developer and yet effectively a beginner at your new stack.<p>1. Managers (and developers, too) don't strive for repeatability and predictability of the process. Sticking to frameworks, reducing variability, forcing an exact development pipeline. It is not appealing to developers, and managers are afraid to push it.<p>2. Lack of a feedback loop. At the end of every project/deliverable/iteration, ask what could have been done to prevent the problem (what could have helped to estimate this more reliably). Implement the answer mercilessly, as if a bad estimate were on par with a deployment failure.<p>3. Move unplanned work to planned work. Prioritize delivering a sound and good 100% of the code over quickly delivering "the first" 80% of your solution. Develop code as if it were controlling Solid Rocket Boosters. Follow good practices like MISRA. Don't allow exceptions. Do PROPER code reviews. Most code reviews I have seen are a colleague spending the minimum possible time so that he doesn't feel he did a bad job. In my opinion, a good code review requires going through everything top down from the requirements, and then bottom up through each statement, to confirm everything is implemented correctly. This takes about as much time as implementing it in the first place. Make sure managers understand that doing this correctly typically takes more time, and that they should expect the payout later, not this iteration.<p>4. Hire carpenters and make them your senior staff. Make sure you have a clear understanding of who is a senior/junior developer.
A senior developer is a person who understands the broader context and can be left to supervise a small project with the understanding that he/she will be able to uphold standards and provide correct solutions and guidance for the junior staff. Make sure your senior developers are carpenters -- most projects don't require exceptional skills and rock star developers. They require people who don't get bored once they see something working, but instead have the drive to finish the second 80% of your functionality, and do it with the same focus as when they started.
"Delivery dates have often irrelevant but very simple to understand impacts. Good and bad solutions have dramatic but very difficult to understand impacts."<p><a href="https://minnenratta.wordpress.com/2017/01/25/things-i-have-learnt-as-the-software-engineering-lead-of-a-multinational/" rel="nofollow">https://minnenratta.wordpress.com/2017/01/25/things-i-have-l...</a>
One missing element is the failure to allow teams or their members to stay on the project. It is very common to see people pulled off to must-haves and support needs, and many projects totally skip accounting for staff vacations, which can be costly with long-term employees who have four to six weeks out.<p>It is so easy to come across the frustrated developer, frustrated that they just aren't allowed to do the work.
Conway's Law: "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."<p>And most organizations -- or collections of three or more people -- have dysfunctions. Rarely is technology the main problem; it's the wet-ware between the keyboards and the chairs that makes or breaks projects.
When I was just starting out as a wet-behind-the-ears developer, an older, grizzled dev gave me invaluable advice for estimating: take your best guess of how long something will take and double it; that's what you tell the users. That advice has stood me in good stead, though I've found that as I've gotten older and more experienced, I could probably reduce the factor from 2 to about 1.5.
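The grizzled-dev rule is simple enough to state as code. This is just an illustration of the heuristic; the function name and hour figures are my own:

```python
def padded_estimate(gut_estimate, factor=2.0):
    """Multiply your gut estimate by a safety factor before quoting it:
    2.0 when starting out, maybe 1.5 once you're more experienced."""
    return gut_estimate * factor

# "It'll take me a day" (8 hours) becomes a quoted two days:
print(padded_estimate(8))        # -> 16.0
print(padded_estimate(8, 1.5))   # the more experienced multiplier -> 12.0
```

The experience-dependent factor matches the comment's observation: with better calibration, the padding you need shrinks, but it rarely reaches 1.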
Yes, plan, the more planning the better. But set the deadline <i>first</i>, and plan and design around meeting the deadline. Ask yourself, "I have two weeks to deliver this, but if I had to deliver <i>something</i> tomorrow afternoon, what would I do?" -- and do that first.
For me, not designing the product as a platform is the no. 1 root cause of all the delays and confusion. People tend to deliver for speed and make compromises. Over time we end up with a pile of mud that slows down all future development. This makes things slower and slower with age.
The eternal conversation, apparently.<p>I agree with making fine-grained plans as a way to uncover issues. Just remember, no plan survives contact with the enemy. On the other hand, fortune favors the prepared. (The first is usually attributed to Moltke and the second to Pasteur; Eisenhower's version was "plans are worthless, but planning is everything.")
And then you make a good plan with buffers, and it fails too.<p>Cause you forgot so many things that cost time: you'll catch a cold, the warm weather distracts you, you ignored the existence of family & friends, your computer breaks down, ...
OT: I like the "thinking systems" infographic, but the fact that the green areas are a few pixels off is annoying me to no end. I hope it's not a new trend.
Let's try another comparison: to a skilled surgeon. 4+4 years to get GP status, then at least 4 more to become a surgeon; another 2-10 to lead in an operating theatre.<p>Diseases vary just like reasons for misbehaviour in code. While coding also has a bit of construction, which surgery doesn't, a lot of coding time is spent on extending or fixing existing behaviour. As such, the simile is better than the bridge-construction one.<p>It would also allow discussing leadership; a strong lead handles the 1-12 hours the surgery takes. He/she is expected to know the human body as a system, being able to diagnose adverse conditions that occur during the operation, instructing the people around her as she goes.<p>Operations can't take too long, or fatigue gets you; the corollary being that you don't have slippage like you do in s/w dev. You can't be too unskilled, or else you can't perform the work. You have to have lots of training before you're the one leading.<p>Contrast the education: a lot of physicians go through much rote memorisation when they start. Then they continue with lab sessions; this could be useful for software engineers and operations folk, by letting them try their hand at diagnosing production systems having problems. Such training could be done with a 20-questions approach, each question being answered by a metric or a category of log entries; in the end you should know what the problem was. Labs at uni are much more constrained; they don't teach how large production systems behave and don't teach mental tools to debug them, they only teach the basics of programming. Furthermore, what they teach of programming is never geared towards what real systems look like, because most teachers have never been close to one.<p>There's no place in the world where you can get an education like what a surgeon gets; comp-sci is more like training everyone to be a psychologist (because we want to get the full picture!).
Apprenticeship programs are more like training to be a nurse.<p>- The formal education people receive is not applied towards bettering industry performance; it's geared towards invention and academia
- The shorter education people receive is not about understanding the system and how to construct and fix it; it's about working next to it
- Similarly hard rote-memorisation tests could be coupled with in-depth debugging/operations sessions and experienced teachers active in large production systems, like understudies preparing to be expert surgeons
- In light of this, we need a career ladder that doesn't end with the same title you start with ("engineer"/"senior engineer") -- one that is structured.