Dec 07, 2003

This place has gone to seed, in large part, because I’ve been doing some actual work, trying to get a software release out — late, inadequate, but out — and as a consequence have followed Floyd McWilliams’s and Evan Kirchhoff’s theorizing about the future of software with more than academic interest. Evan starts here, Floyd replies, more Evan, more Floyd, and finally Evan again. The question at hand is when all of our jobs shall be outsourced to Softwaristan (India), where they produce high-quality source code for pennies a day, and what we software developers shall be doing for a living when that happens. As Evan puts it, “Floyd says ‘decades,’ I say ‘Thursday.’”

And I say, with due respect to both of these highly intelligent gentlemen, that neither one has the faintest idea what he’s talking about. They are speculating on the state of a science seventeen years in the future, and if they were any good at it they wouldn’t be laboring, like me, in the software mines, but in the far more lucrative business of fortune-telling. I — and I suspect I speak for Floyd and Evan here too — would happily swap W-2s, sight unseen, with Faith Popcorn or John Naisbitt, and they’re always wrong.

Floyd compares the current state of software development to chemistry circa 1700, which is generous; I would choose medicine circa Paracelsus, the Great Age of the Leeches. The two major theoretical innovations in modern software are design patterns and object orientation. Design patterns and object orientation are, depending on how you count, ten and thirty years old respectively, which indicates the blazing pace of innovation in the industry. Design patterns mean that certain problems recur over and over again, and instead of solving them the old-fashioned way, from scratch every time, you write down a recipe, to which you refer next time the problem crops up. Object orientation means that software modules, instead of just encapsulating behavior (“procedural programming”), now encapsulate data and behavior, just like real life! Now doesn’t that just bowl you right over?
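The whole of the object-orientation "innovation" fits in a few lines. A minimal Python sketch (my illustration, not anything from the post) of the procedural and object-oriented styles side by side:

```python
# Procedural style: data and behavior live apart.
def area(shape):
    # 'shape' is a bare tuple: ("rect", width, height)
    kind, w, h = shape
    return w * h

# Object-oriented style: data and behavior travel together.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

print(area(("rect", 3, 4)))    # procedural: 12
print(Rectangle(3, 4).area())  # object-oriented: 12
```

The second version is the one that is supposed to bowl you over.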

Hardware, by contrast, improves so rapidly that there’s a law about it. It is a source of constant reproach to software, which has no laws, only rueful aphorisms: “Adding people to a late software project makes it later,” “right, fast, cheap: choose two,” and the like.

Evan claims, notwithstanding, that “a working American programmer in 2020 will be producing something equivalent to the output of between 10 and 1000 current programmers.” Could be. He points to analogies from other formerly infant industries, like telephones and automobiles. He also cites Paul Graham’s famous manifesto on succinctness as power, without noting that Graham’s language of choice is LISP. LISP is forty years old. If we haven’t got round to powerful languages in the last four decades are we really going to get round to them in the next two?
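Graham's succinctness-as-power argument survives translation out of LISP. A sketch in Python (my own illustration; Graham's examples are in LISP) of one computation at two levels of succinctness:

```python
# Sum of the squares of the even numbers below 20, spelled out long-hand:
total = 0
for n in range(20):
    if n % 2 == 0:
        total += n * n

# The same program as a single expression: Graham's succinctness in miniature.
total_succinct = sum(n * n for n in range(20) if n % 2 == 0)

print(total, total_succinct)  # both 1140
```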

Floyd counters with an example of an object-relational library that increased his team’s productivity 25-50%, arguing that “as long as development tools are created in the same order of magnitude of effort as is spent using them, they will never cause a 100 or 1000-fold productivity improvement.” Could be. Certainly if, as we baseball geeks say, past performance is the best indicator of future performance, I wouldn’t hold my breath for orders-of-magnitude productivity improvements. On the other hand, bad as software is, enormous sums are poured into it, large segments of the economy depend on it, and the regulators do not even pretend to understand it. This all bodes well for 2020.

Me, I don’t know either, which is the point. Evan works on games, which are as good as software gets; this makes him chipper. Floyd works on enterprise software, which is disgusting; this makes him dolorous. I work on commercial business software, which is in-between; this makes me ambivalent. We all gaze at the future and see only the mote in our own eye.

(Update: Rick Coencas comments. Craig Henry comments.)

  13 Responses to “2020 Foresight”

  1. Aaron –

    You’re aware that leeches are making a comeback in the medical community, right?

    Here’s a link to paste in with the latest.

    http://www.kudzumonthly.com/kudzu/oct03/Leeches.html

  2. Somewhere in "Methods of Logic" Quine makes precisely Paul Graham’s point. If I recall correctly, Quine points out that certain logical operations could be disregarded (if-then conditionals, I believe) and replaced with longer conjunctive and disjunctive expressions. This has the advantage of lowering the number of operators a logic uses, but the disadvantage of increasing the complexity of statements. We live with a greater number of operators in order to lower the level of our everyday toil.

    But the question I have is this: Doesn’t the increase in processor speed also speed the development of software? Isn’t there any aspect of software development that is aided by hardware advances?
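The trade Quine describes can be checked mechanically. A small Python sketch (my illustration of the commenter's point, not Quine's notation): the conditional is taken as a primitive truth table, then eliminated in favor of longer formulas built from "and", "or", and "not" alone.

```python
from itertools import product

# The material conditional as a bare truth table:
IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}

# Quine's observation: "if-then" is dispensable.  Every conditional can be
# replaced by a formula using only "not" and "or".
for p, q in product([True, False], repeat=2):
    assert IMPLIES[(p, q)] == ((not p) or q)

# The cost shows up when conditionals nest: (p -> q) -> r becomes
# (p and not q) or r.  Fewer operators in the logic, longer formulas.
for p, q, r in product([True, False], repeat=3):
    nested = IMPLIES[(IMPLIES[(p, q)], r)]
    rewritten = (p and not q) or r
    assert nested == rewritten

print("conditional eliminated; formulas got longer")
```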

  3. But the question I have is this: Doesn’t the increase in processor speed also speed the development of software? Isn’t there any aspect of software development that is aided by hardware advances?

    Sure. The higher the processor speed, the more it permits the use of high(er)-level programming languages.

    Back in the beginning PC days (1981), better than merely acceptable application performance dictated programming in so-called assembly language, which is only one small step up from programming at the "bare metal" level. IOW, really, really slow development, and a huge margin for error. Increases in processor speed permitted first C, then even higher-level languages.

    I suppose there’s an upper limit of benefit in that direction, but I imagine it’s not been reached yet (as I’ve been out of the loop for many years, I can’t speak authoritatively to that point).

    ACD

  4. The move from assembly-level programming to high level languages did result in a huge increase in programmer productivity, but there are good reasons to believe that going to still higher level languages won’t produce similar gains. Fred Brooks, in his essay "No Silver Bullet," lays out a compelling argument for why this is so. Briefly, the argument is that as you continue to reduce the burden of realizing the programmer’s ideas in code, at some point the effort becomes insignificant next to the effort of coming up with a good design in the first place. At that point further gains from advanced programming languages have to become insignificant because they are attacking a minor part of the problem.

    I think we see precisely this phenomenon in Floyd’s arguments. At one point he mentions that the problem with the software imported from India wasn’t that it was bad code, but that it just didn’t integrate well with the stuff they already had. Using a higher level language won’t address problems of that sort.

    Software is conceptually complex, possibly more so than any other human construct. Our initial ideas on how a piece of software will fit together and how it will work are invariably defective, and this is currently the most important limitation on programmer productivity.

    -rpl

  5. Just for the record

    I wrote:

    Back in the beginning PC days (1981), better than merely acceptable application performance dictated programming in so-called assembly language, which is only one small step up from programming at the "bare metal" level.

    That should have read:

    "Back in the beginning PC days (1981), better than merely acceptable application performance dictated programming in so-called assembly language, which language was absolutely necessary for all system-level programming, and which is only one small step up from programming at the ‘bare metal’ level."

    ACD

  6. I tend to fall on the "Software is always going to be hard" side of things (and as long as we’re comparing our street cred, I’ll mention that I work on JPL’s Deep Space Network ground system).

    Software design is all about managing complexity. If you give us better or higher-level tools, we’ll simply try to manage even more complexity until once again we’re running at the ragged edge of disaster. There’s no end to it.

  7. Software design is all about managing complexity. If you give us better or higher-level tools, we’ll simply try to manage even more complexity until once again we’re running at the ragged edge of disaster. There’s no end to it.

    Y’know, I’ve heard that song innumerable times, and I never really understood it. It seems to me that the single rule to follow to avoid that trap completely is to conceptualize a problem at any level of complexity in terms of its simplest elements; build the program modularly with rigidly enforced single entry and exit points to meet the needs of each element, and so "grow" the full program.

    Clearly, there’s something wrong with my reasoning as the song above referenced is still being sung. But for the life of me, I can’t figure out why.

    ACD

  8. AC, I don’t think there’s anything wrong with your reasoning. It’s just far easier said than done. A friend of mine recently wrote a library for dealing with financial instruments of arbitrary complexity. Now, if we follow your advice, what’s the "simplest element" we’re dealing with here? Originally he conceived it as a time-series of payments, but then he realized that the fact that all goods can be either lent or sold was more fundamental. You wind up with an entirely different library, depending where you begin, and where you should begin is far from obvious.

    What makes programming difficult is not its mechanics, which are straightforward, but the fact that all programs are models of some aspect of reality, an aspect that the programmer, whose specialty is programming, not the domain, imperfectly understands, and that the domain experts themselves, who are unaccustomed to thinking about what they do in fundamental terms, are ill-equipped to explain.
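Every name below is hypothetical (I have no idea how the friend's library actually looked), but a Python sketch shows how differently the two starting points in the financial-instrument example come out:

```python
from dataclasses import dataclass, field
from typing import List

# Foundation A: an instrument is a time-series of payments.
@dataclass
class Payment:
    day: int        # days from today
    amount: float

@dataclass
class PaymentStream:
    payments: List[Payment] = field(default_factory=list)

    def total(self):
        return sum(p.amount for p in self.payments)

# Foundation B: the primitive is that any good can be lent or sold;
# a loan becomes a sale bundled with a promised repurchase.
@dataclass
class Sale:
    good: str
    price: float

@dataclass
class Loan:
    sale: Sale        # hand the good over now...
    repurchase: Sale  # ...and buy it back later

# A two-payment bond under foundation A:
bond = PaymentStream([Payment(180, 50.0), Payment(360, 1050.0)])
print(bond.total())  # 1100.0

# Comparable economics under foundation B look nothing alike:
repo = Loan(Sale("bond", 1000.0), Sale("bond", 1010.0))
```

Neither skeleton is wrong; they simply cannot be grafted onto each other later, which is why the choice of "simplest element" matters so much.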

  9. …an aspect that the programmer, whose specialty is programming, not the domain, imperfectly understands…

    I take your point, Aaron, and know you’re right. However, it seems to me that a programmer who doesn’t fully understand the domain ought not be permitted to go anywhere near a coding screen until he does understand, or at the very least has, at all times, someone by his elbow who does. Having a program written by a programmer who has no real understanding of the domain with which the program deals is, on its face, an absurdity that ought not be countenanced.

    ACD

  10. AC: There are two problems with this. The first is that the very people who best understand the domain can least be spared to baby-sit the programmers. The second is that the greater part of domain knowledge is tacit. Even people who understand their business extremely well are unused to translating their knowledge into programming terms, and most programmers aren’t too great at teasing it out either. (The ones who are make a lot of money.) This is why the fashionable models of software development, like Extreme Programming, emphasize frequent iterations: putting something, anything, that works in front of the client as often as possible and having him play with it and tell you what’s wrong before you go too far off the rails.

  11. Aaron makes a good point about iterative software development. It is often through a black-box QE process and a good Beta program that most of the workflow issues get hammered out, not in the development phase.

  12. Right you are again, Aaron.

    But the core problem remains, and it’s, if not lethal, cumulatively disabling. It seems to me that a rational model (and unlike in poetry, in programming the rational is God) would be for the team manager, i.e., the one responsible for the overall architecture of the program, either to have a full understanding of the domain himself, tacit elements included, or to work in tandem with an expert the company hires for the duration of the program’s development, the former translating the domain into conceptual programming paradigms and the latter supplying the domain goods, until between them they get the domain down pat.

    An ideal, most certainly, but there seems to be no way out short of producing a program that’s at its core a sham.

    ACD

  13. The problem is analogous to creating productivity software for artists.
