
Estimating application modernization project costs

by John Browne, on Oct 8, 2013 11:46:00 AM

I don't think there's any debate that software is about as complex as anything on the planet; since the dawn of the computer in the 1940s, people as diverse as Grace Murray Hopper and Charles Simonyi have tried to simplify the programming of these necessary but obstreperous devices.

Partly because of the inherent complexity of software, and partly because of a host of other factors, estimating the time and cost to develop a piece of code is, well, tough. It's so tough that anyone who has been in this industry for more than a week has a story about the Project From Hell. You know the one: the requirements keep changing, dates keep slipping, milestones are missed, fixing bugs is a game of Whack-a-Mole--the whole thing becomes a kind of death march.

It's October, which means Halloween, so I get to use a zombie picture. Part of the uncertainty is people: some pundits claim that a great developer can be 20x as productive as an "average" one. Other people say, nah, the difference between the best and worst on a given team is probably more like 2x, not 10 or 20x. The 20x comparison is more like stacking the senior rocket scientist at a Google or Facebook against Joe Lunchbucket down at the local insurance company (not to pick on insurance companies--substitute your own "not the hottest Silicon Valley startup" replacement here). But we've all known people who could just crank it, and we've all known schlubs.

One of the really hard parts about cost estimating is that requirements for new code are hard to define precisely in advance. And unless the project is very similar to one the team has done already, estimating the implementation effort for each requirement involves a lot of guessing. I've known people who plugged developer-days into Excel spreadsheets listing features, but we all knew they were pulling those estimates out of their, um, hat. They did it because management said "we gotta have a schedule," and the schedule requires a work breakdown structure, and that requires someone to put resource estimates next to tasks. All very lovely, and it works really well for building a house, where you can accumulate both industry- and company-wide data that says it takes XX hours per square foot to frame a certain kind of house. And where you know that two framers can work 2.5 times faster than one framer but 10 framers just get in each other's way.

Software people like to accumulate data, too, but two problems arise: 

  1. What are you measuring? 
  2. What were you doing?

Looking at the second question first: writing one application isn't like writing another application. So unlike lots of industries where multiple projects have a great deal of commonality, software projects typically don't. To use an example about which I know absolutely nothing (although that hasn't stopped me in the past), it's probably like film-making. Every movie goes through defined stages: scriptwriting, casting, production planning, set design, filming, editing, music, yadda yadda yadda. But knowing that process tells you very little about the cost of The Breakfast Club compared to Titanic.

Realistically, every humongous project with tons of moving parts is hard to estimate and keep on track. Years ago I toured Hoover Dam and learned that, using clipboards and slide rules, they came in on schedule and under budget, but they also had a lot of funerals in the process. More common is a project like Boston's Big Dig, where everything seemed to go wrong. Software is no different.

The question of measurement goes way back, to when people first began applying stopwatches to programmers to figure out who was fast, who was slow, and why. IBM famously measured programmer productivity in KLOCs (thousands of lines of code), where engineers who cranked out tons of code were rewarded over those who wrote fewer. This led in turn to the phenomenon of IBM creating pieces of OS/2 to hand off to Microsoft, who then rewrote them to make them smaller; the IBM programmers had an incentive to bloat everything up, while the Microsoft programmers needed to fit it all into a limited memory footprint. At Microsoft at that time (the 1980s), believe it or not, the most celebrated developers were those who could make it small, not big.

KLOCs are still used for various measurements (we use them even though we recognize it's a flawed metric), and more recently function points have had their day in the sun. Several people have connected the two by dividing the number of lines of code in an application by the number of function points, and not surprisingly, the higher-level the programming language, the smaller the ratio. Assembly language (not macro assembler) can run over 300 LOC per FP, while C# is more like 54. Excel is probably like 1.
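
To make the ratio concrete, here's a minimal sketch of the "backfiring" arithmetic that converts a raw line count into a rough function point estimate. The ratios are the ballpark figures quoted above (I've used 320 for assembler as a stand-in for "over 300"), and the function and application sizes are purely illustrative:

```python
# Rough "backfiring": estimate function points from a line-of-code count
# using approximate LOC-per-FP ratios. These ratios are ballpark figures,
# not measurements of any particular codebase.

LOC_PER_FP = {
    "assembly": 320,   # basic (non-macro) assembler: over 300 LOC per FP
    "csharp": 54,      # C#: roughly 54 LOC per FP
}

def estimate_function_points(loc: int, language: str) -> float:
    """Convert a LOC count into an approximate function point count."""
    return loc / LOC_PER_FP[language]

if __name__ == "__main__":
    # A hypothetical 250 KLOC C# application:
    fp = estimate_function_points(250_000, "csharp")
    print(f"~{fp:,.0f} function points")   # roughly 4,600 FP
```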

Once you have a way to measure software, it's natural to calculate programmer productivity as a way to baseline estimates for future work. This in itself creates a whole slew of potential errors and miscalculations, some of which are addressed in the COCOMO model as well as the far more complex COSYSMO method. You can read about more different ways to estimate software projects than you can imagine here.
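
For a flavor of what these models look like, here's a minimal sketch of the Basic COCOMO formulas, which express effort in person-months as a power function of program size in KLOC. The coefficients are the published Basic COCOMO values for the organic, semi-detached, and embedded project classes; the 100 KLOC input is just an illustrative number, and a real estimate would use the Intermediate or Detailed model with its cost drivers:

```python
# Basic COCOMO: effort (person-months) = a * KLOC^b,
# schedule (calendar months) = c * effort^d.

COCOMO_BASIC = {
    # mode:          (a,   b,    c,   d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semidetached":  (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in calendar months)."""
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

if __name__ == "__main__":
    effort, months = basic_cocomo(100, "organic")   # a hypothetical 100 KLOC project
    print(f"~{effort:.0f} person-months over ~{months:.0f} months")   # ~302 PM, ~22 months
```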

One interesting aspect of all this investigation is that a body of data is available, although it's not exactly apples to apples; maybe it's more like bananas to plantains. Or plums to peaches, I don't know. But using reasonably large data sets it's possible to deduce some numbers about how many lines of code or function points a developer can write per day, how many bugs per KLOC or FP will get created, how long it takes to find and fix each bug, and even how many will slip through detection and wind up in the delivered product.

Some random data points: 

  • IBM for many years used a rule of thumb that a developer could, on average, produce 10 LOC/day. Microsoft's Excel team in the 90s (arguably one of the best development teams ever) was able to do about 50 LOC/day. Note that these figures include everything from project inception to release, not just coding.
  • Line-of-business applications of around 1000 function points (FPs) will average 4.5 defects (bugs) per FP. The delivered software will have about 1.2 defects per FP.
  • Looking at all development by methodology (again around 1000 FPs), the waterfall model will generate about 7 defects per FP with a removal efficiency of 75%; Agile reduces that to about 5.5 defects per FP with a removal efficiency of 87%.
  • With 85% reused code, the defect ratio can drop to 2.25 per FP and removal efficiency jumps to 97%, leaving delivered defects at a scant 0.09 per FP (the arithmetic is sketched just below).
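
The arithmetic behind these delivered-defect figures is simple: delivered defects per FP are the defects created per FP times whatever fraction the removal process misses. Here's a minimal sketch using the numbers from the list above (rounding in the published figures means the results only match approximately; for example, 2.25 defects per FP at 97% removal works out to about 0.07 rather than 0.09):

```python
# Delivered defects per FP = defects created per FP * (1 - removal efficiency).
# Inputs are the rough figures from the list above.

def delivered_defects_per_fp(created_per_fp: float, removal_efficiency: float) -> float:
    return created_per_fp * (1.0 - removal_efficiency)

scenarios = {
    "waterfall (~1000 FP)": (7.0, 0.75),
    "agile (~1000 FP)":     (5.5, 0.87),
    "85% reused code":      (2.25, 0.97),
}

for name, (created, efficiency) in scenarios.items():
    per_fp = delivered_defects_per_fp(created, efficiency)
    print(f"{name}: ~{per_fp:.2f} delivered defects per FP "
          f"(~{per_fp * 1000:.0f} in a 1000 FP application)")
```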

When you're thinking about application modernization, a couple of points jump out from all this.

First, who's going to be working on the project? Will it be your best or your worst developers? Dev managers tell me their top people don't want to work on legacy code; they want to do the new, cool stuff. So it falls to the junior people, or maybe the old-timers who know the legacy language and application. That obviously affects net productivity.

Second, when considering a modernization project, if you can migrate the existing code (i.e., reuse it) you will have a far easier time of it than if you decide you have to rewrite it. There are many good reasons to throw out the old code and start fresh, but recognize that it's an expensive way to modernize.
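
To put a rough number on "expensive," here's a back-of-the-envelope sketch that combines figures quoted earlier in this post: about 54 LOC per function point for C#, and 10 to 50 LOC per developer-day from inception through release. The 1000 FP application size and the 250 working days per year are illustrative assumptions, not data:

```python
# Back-of-the-envelope cost of a full rewrite, using the rough figures
# quoted earlier: ~54 LOC per function point for C#, and 10-50 LOC per
# developer-day measured from project inception through release.

LOC_PER_FP_CSHARP = 54
APP_SIZE_FP = 1000            # assumed application size
WORKING_DAYS_PER_YEAR = 250   # assumed

rewrite_loc = APP_SIZE_FP * LOC_PER_FP_CSHARP   # ~54,000 lines written from scratch

for loc_per_day, label in [(10, "IBM rule of thumb"), (50, "Excel-team pace")]:
    developer_days = rewrite_loc / loc_per_day
    developer_years = developer_days / WORKING_DAYS_PER_YEAR
    print(f"{label}: ~{developer_days:,.0f} developer-days "
          f"(~{developer_years:.0f} developer-years)")
```

Even at the optimistic end that's several developer-years of effort, which is why reusing the existing code usually wins on cost.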

Topics: software development, application modernization, Microsoft
