I don't think there's any debate that software is about as complex as anything on the planet. Since the dawn of the computer in the 1940s, people as diverse as Grace Murray Hopper and Charles Simonyi have tried to simplify the job of programming these necessary but obstreperous devices.
Partly because of the inherent complexity of software, and partly because of a host of other factors, estimating the time and cost to develop a piece of code is, well, tough. It's so tough that anyone who has been in this industry for more than a week has a story about the Project From Hell. You know the one: the requirements keep changing, dates keep slipping, milestones are missed, fixing bugs is a game of Whack-a-Mole, and the whole thing becomes a kind of death march.
One of the really hard parts about cost estimating is that requirements for new code are hard to define precisely in advance. And unless the project is very similar to one this team has done already, the implementation effort for each given requirement probably requires a lot of guessing. I've known people who plugged developer-days into Excel spreadsheets listing features, but we all knew they were pulling those estimates out of their, um, hat. They did it because management said "we gotta have a schedule," and a schedule requires a work breakdown structure, and that requires someone to put resource estimates next to tasks. All very lovely, and it works really well for building a house, where you can accumulate both industry- and company-wide data that says it takes XX hours per square foot to frame a certain kind of house. And where you know that two framers can work 2.5 times faster than one framer but 10 framers just get in each other's way.
Software people like to accumulate data, too, but two questions arise: how do you measure software work in the first place (lines of code? function points? something else?), and is writing one application similar enough to writing another that data from past projects even carries over?
Looking at the second question first: writing one application isn't like writing another application. So unlike lots of industries, where multiple projects have a great deal of commonality, software projects typically share very little. To use an example about which I know absolutely nothing (although that hasn't stopped me in the past), it's probably like film-making. Every movie goes through defined stages: scriptwriting, casting, production planning, set design, filming, editing, music, yadda yadda yadda. But knowing that process tells you very little about the cost of The Breakfast Club compared to Titanic.
Realistically, every humongous project with tons of moving parts is hard to estimate and keep on track. Years ago I toured Hoover Dam and learned that, using clipboards and slide rules, they came in on schedule and under budget, but they also had a lot of funerals in the process. More common is a project like Boston's Big Dig, where everything seemed to go wrong. Software is no different.
KLOCs are still used for various measurements (we use them even though we recognize it's a flawed metric), and more recently function points have had their day in the sun. Several people have connected the two by dividing the number of lines of code in an application by the number of function points, and, not surprisingly, the higher-level the programming language, the smaller the ratio. Assembly language (not macro assembler) can run over 300 LOC per FP, while C# is more like 54. Excel is probably like 1.
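If it helps to see the arithmetic, here's a minimal sketch of that kind of "backfiring" conversion. The ratio table and the 1,000-FP application are illustrative assumptions loosely in the ballpark of the figures above, not an authoritative benchmark.

```python
# Rough "backfiring": convert a function-point count into an approximate LOC
# figure using a per-language LOC/FP ratio. All ratios here are illustrative
# placeholders, roughly matching the numbers mentioned above.
LOC_PER_FP = {
    "basic assembly": 320,   # non-macro assembler
    "C": 128,
    "C#": 54,
}

def estimate_loc(function_points: int, language: str) -> int:
    """Approximate lines of code for an application of a given functional size."""
    return function_points * LOC_PER_FP[language]

if __name__ == "__main__":
    fp = 1_000  # hypothetical application size in function points
    for lang in ("basic assembly", "C#"):
        print(f"{fp} FP in {lang}: roughly {estimate_loc(fp, lang):,} LOC")
```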
Once you have a way to measure software, it's natural to calculate programmer productivity as a way to baseline estimates for future work. This in itself creates a whole slew of potential errors and miscalculations, some of which are addressed in the COCOMO model as well as the far more complex COSYSMO method. You can read about more ways to estimate software projects than you can imagine here.
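For the curious, here's roughly what the simplest form of the model, Basic COCOMO, computes from size alone; COCOMO II and COSYSMO layer many more cost drivers on top of this. The 100 KLOC project is just a made-up example.

```python
# Basic COCOMO: effort and schedule from size alone. The (a, b, c, d)
# coefficients are the published Basic COCOMO constants; real estimates would
# use COCOMO II (or COSYSMO) with many additional cost drivers.
COCOMO_PARAMS = {
    # project class: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, project_class: str = "organic"):
    a, b, c, d = COCOMO_PARAMS[project_class]
    effort = a * kloc ** b       # person-months
    schedule = c * effort ** d   # calendar months
    staff = effort / schedule    # average headcount
    return effort, schedule, staff

effort, months, staff = basic_cocomo(100, "semi-detached")  # hypothetical 100 KLOC project
print(f"~{effort:.0f} person-months over ~{months:.0f} months with ~{staff:.0f} people")
```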
One interesting aspect of all this investigation is that a body of data is available. It's not exactly apples to apples; maybe it's more like bananas to plantains. Or plums to peaches, I don't know. But using reasonably large data sets it's possible to deduce some numbers about how many lines of code or function points a developer can write per day, how many bugs per KLOC or FP will get created, how long it takes to find and fix each bug, even how many will slip through detection and wind up in the delivered product.
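As a toy illustration of how those per-day and per-KLOC rates get turned into an estimate, here's a back-of-the-envelope sketch. Every constant in it is a placeholder I made up for the example, not a figure from any of those data sets.

```python
# Back-of-the-envelope project estimate from productivity and defect rates.
# All constants are illustrative placeholders -- substitute your own history.
LOC_PER_DEV_DAY  = 25     # net output per developer-day, all phases included
DEFECTS_PER_KLOC = 15     # defects injected during development
HOURS_PER_FIX    = 4      # average time to find and fix one defect
ESCAPE_RATE      = 0.05   # fraction of defects that reach the delivered product

def rough_estimate(loc: int, team_size: int, hours_per_day: float = 6.0):
    kloc = loc / 1000
    coding_days = loc / LOC_PER_DEV_DAY / team_size
    defects = kloc * DEFECTS_PER_KLOC
    fixing_days = defects * HOURS_PER_FIX / hours_per_day / team_size
    shipped_defects = defects * ESCAPE_RATE
    return coding_days + fixing_days, shipped_defects

days, shipped = rough_estimate(loc=50_000, team_size=5)
print(f"~{days:.0f} working days for the team, ~{shipped:.0f} bugs likely shipped")
```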
Some random data points:
When you're thinking about application modernization, a couple of points jump out from all this.
First, who's going to be working on the project? Your best developers or your worst? Dev managers tell me their top people don't want to work on legacy code; they want to do the new, cool stuff. So it falls to the junior people, or maybe the old-timers who know the legacy language and application. This obviously goes to net productivity.
When considering a modernization project, if you can migrate the existing code (i.e., reuse it), you will have a far easier time of it than if you decide you have to rewrite it. There are many good reasons to throw out the old code and start fresh, but recognize that it's an expensive way to modernize.
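To make that cost difference concrete, here's a hedged back-of-the-envelope comparison. The per-day rates, the 200 KLOC size, and the defect figure are all assumptions for illustration, not measurements from any real project.

```python
# Migrate-vs-rewrite, back of the envelope. Every number below is an assumed
# placeholder for illustration only.
LOC = 200_000                     # size of the hypothetical legacy application

MIGRATE_LOC_PER_DAY = 400         # assisted conversion plus testing/remediation
REWRITE_LOC_PER_DAY = 25          # designing, writing, and debugging new code
NEW_DEFECTS_PER_KLOC = 15         # defects injected only when code is rewritten

migrate_days = LOC / MIGRATE_LOC_PER_DAY
rewrite_days = LOC / REWRITE_LOC_PER_DAY
rewrite_defects = (LOC / 1000) * NEW_DEFECTS_PER_KLOC

print(f"migrate: ~{migrate_days:,.0f} developer-days")
print(f"rewrite: ~{rewrite_days:,.0f} developer-days plus ~{rewrite_defects:,.0f} new defects to find")
```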