We can’t afford to write safe software

Let’s start by thinking about a very simple example.  I’ve recently switched to using Ruby as my language of choice, after a decade as a Perl hacker.  Ruby does a lot of things more nicely than Perl, including having proper object syntax, simple everything-is-an-object semantics, a sweet module/mixin scheme and very easy-to-use closures, so I’ve mostly been very happy with the switch.  In so far as Ruby is a better Perl, I don’t see myself ever writing a new program in Perl again, except where commercial considerations demand it.

But there are some things that I miss from Perl, and one of them is the ability to say (via use strict) that all variables must be declared before they’re used.  When this pragma is in effect (i.e. pretty much always in serious Perl programs), you can’t accidentally refer to the wrong variable — for example, if you use $colour when you meant $color (because you’re a Brit working on a program written in America), the Perl compiler will pull you up short and say:

Global symbol “$colour” requires explicit package name at x.pl line 4.
Execution of x.pl aborted due to compilation errors.

In Ruby, there is no need to declare variables, and indeed no way to declare them even if you want to (i.e. nothing equivalent to Perl’s my $variableName).  You just go ahead and use whatever variables you need, and they are yanked into existence, bowl-of-petunias-like, as needed.
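A contrived Ruby sketch (all names invented for illustration) shows the failure mode this invites: a misspelt assignment silently conjures a second variable rather than raising any kind of error.

```ruby
# A Brit "corrects" an American colleague's variable, but the misspelling
# silently creates a brand-new local instead of updating the old one.
def favourite_shade
  color = "red"
  colour = "blue"  # typo for color: Ruby just conjures a new variable
  color            # still "red"; the "fix" on the line above never took effect
end

favourite_shade  # => "red", not "blue"
```

Run under `ruby -w` you at least get an "assigned but unused variable" warning, but nothing stops the program from running wrongly.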

Which means that Ruby programs suffer a lot from $color-vs-$colour bugs.  Right?

Not so much, as it turns out.

Problems that are not really problems?

Ruby’s been my language of choice for about four months now — which I admit isn’t very long, but it’s long enough that statistics over that period are not completely meaningless.  So far, I’m not aware that I’ve run into a $color-vs-$colour problem even once.  Doesn’t mean it’s not happened, of course — maybe it did, the problem was obvious when I saw the program run wrongly, and I fixed it on autopilot without really registering it.

At any rate, if it’s happening at all, it’s not a big deal for me in practice.  Which makes me think: is it ever?  Could it be that variable-misspellings are a category of bug that we naturally tend to guard against rigorously but that we don’t actually need to worry about?

If this is true, then it’s come as a surprise to me.   When I was switching to Ruby, I sent emails to my Ruby-maven friends whining about the lack of variable declaration and foreseeing all kinds of doom arising from it.  I’m as surprised as anyone that it’s not happened.

In the meantime, I have saved myself the trouble of typing my $variable (or, since it’s Ruby and we don’t need line-noise characters in variable names, my variable) some hundreds or maybe even thousands of times.  Now, typing that is not a particularly heavy burden.  If it takes, say, three seconds to type each time, and I’ve omitted six hundred of them in the last few months, that’s 1800 seconds which is half an hour.  Long enough to watch an episode of Fawlty Towers, but not long enough to dramatically change my programming lifestyle.

Particularly heavy burdens

But now think about the difference between typing

def qsort(a)

and

public static ArrayList<Integer> qsort(ArrayList<Integer> a)

In other words, explicitly stating your types.  We do this for basically the same reason that we declare variables before use: to guard against our own mistakes, and to get them reported to us as quickly and clearly as possible[1].  Now that is, unquestionably, a Good Thing[2].  I am a big fan of what I call the FEFO principle: that programs should Fail Early, Fail Often.  When something goes wrong, I want to hear about it straight away: what can be worse than trying to debug a program that ignores an error condition until it pops up later, when all the evidence has gone?
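For concreteness, here is a hypothetical completion of that one-line Ruby signature — a sketch invented for this post, not code from any real program — showing what you get in exchange for saying nothing about types:

```ruby
# Duck-typed quicksort: no types named anywhere.  It accepts anything
# enumerable whose elements are mutually comparable with <.
def qsort(a)
  items = a.to_a
  return [] if items.empty?
  pivot, *rest = items
  qsort(rest.select { |x| x < pivot }) +
    [pivot] +
    qsort(rest.reject { |x| x < pivot })
end

qsort([3, 1, 2])       # => [1, 2, 3]
qsort(%w[pear apple])  # => ["apple", "pear"]
```

The flip side, FEFO-wise, is that a call like qsort(42) fails not at compile time but somewhere inside the method at run time.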

But let’s also be honest that all that public static void main stuff does impose a real burden.  A much heavier one than the occasional sprinkling of my $variable.  What we have here (and this shouldn’t really surprise anyone) is a case of getting a real benefit in return for a real cost.  Well!  Turns out that you can’t get something for nothing!  Who’d have thought?

(Now of course you have your dynamic-typing fundamentalists, who will argue that having type errors diagnosed up front is valueless; and likewise, you have your static-typing fundamentalists, who will argue that all the scaffolding in languages like Java imposes no cost.  Since both are clearly talking nonsense, and worse, are impervious to rational argument, let’s just treat them as we would treat that other wacky pair of opposed fundamentalists, “Doctor” Kent Hovind and Richard Dawkins, and ignore them.)

So the question is not “is static or dynamic typing better?” — advocates of both sides will be able to give cogent reasons in support of their position; and opponents of each side will be able to give cogent reasons why the position is wrong.  Stage Zero in understanding the problem is simply to recognise that there really are legitimate reasons to adopt either position.  The question is more along these lines: given that static typing imposes an overhead on programming, under what circumstances does that overhead pay for itself?

When does the cognitive tax imposed by static typing pay for itself?

And as soon as the question’s phrased in those terms, it gets a lot easier to think clearly about.  Because we realise that it all comes down to the question of what it costs to have a bug.  If I’m writing an e-commerce web-site that sells books, a mistake may mean that I lose an order worth £20.  If I write software that runs life-support systems in hospitals, a mistake may mean that someone dies.  It seems obvious to me (although I admit that using the word “obvious” is always asking for trouble) that in the former case, it’s better for me to spend my time getting more work done — adding features and whatnot — and risk losing the odd sale to a bug that static typing might have found for me.  It also seems obvious that in the latter case I should use every tool available to ensure that the code is correct, and that the cognitive overhead of static typing is a small price to pay.  Somewhere in between those extremes lies the crossover point.  But where?  Those of you who work on banking software might have opinions on this: bugs can potentially have dramatic financial consequences but are unlikely to directly endanger life and limb.
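The crossover reasoning is nothing deeper than expected-value arithmetic.  As a sketch — with every number invented purely for illustration:

```ruby
# Static typing pays for itself once the bugs it would have caught
# cost more than the overhead of writing out all the types.
def static_typing_pays?(bugs_caught:, cost_per_bug:, typing_overhead:)
  bugs_caught * cost_per_bug > typing_overhead
end

# Bookshop: a few lost £20 orders against £2000 of programmer time.
static_typing_pays?(bugs_caught: 3, cost_per_bug: 20, typing_overhead: 2000)
# => false

# Life-support: one prevented catastrophe dwarfs any typing overhead.
static_typing_pays?(bugs_caught: 1, cost_per_bug: 1_000_000, typing_overhead: 2000)
# => true
```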

Safety is expensive

Now here’s the thing: we all know from experience that as organisations grow, they invariably start to accumulate more and more rules, procedures, forms to fill in, and so on.  One day, when a company has 200 employees, someone climbs a stepladder to change a lightbulb, falls off the ladder, breaks his collar bone and has to take two weeks paid leave.  That costs the company, say, £3000, and someone high up thinks “This must Never Be Allowed To Happen Again!”  So before you know it, there’s a set of Stepladder Safety Procedures that have to be followed, and no-one is allowed to use a stepladder at all until they’ve taken the half-day Stepladder Safety Course.  Eventually every employee has taken the course (at a cost of 100 person-days, or £30,000) and as a result no-one falls off a stepladder again for a while.  (We’re charitably assuming that the course is successful and that the overall rate of stepladder accidents really does decrease as a result of everyone having taken the course.)

Now of course in objective financial terms, this doesn’t pay off until we reach the point where, without the course, ten people would have fallen off their stepladders on office time.  But in the meantime the company keeps growing and the hundreds of new employees are also taking this expensive and not-usually-useful course.

For similar reasons, mature companies will often have many, many other procedures that everyone has to follow.  Because someone once nicked £50 worth of stationery and It Must Never Be Allowed To Happen Again, all 200 employees spend fifteen minutes every week filing Stationery Requisition Forms (at a total cost of £2000 per week); and so it goes.  We all know someone who’s worked in one of these places where you can’t so much as sneeze without filling in a form in triplicate first.  Some of you are unfortunate enough to work for such companies.

In some circumstances — some, I say! — all that tedious mucking about with static types is like the Stepladder Safety Course that costs ten times more than it saves.

On the other hand, there are circumstances where safety is important.  Deeply important.  Where additional layers of procedures, forms, validations and suchlike pay for themselves over and over again.

So here’s my thinking: the older and larger an organisation gets, the more it tends to lean on formal procedures, form-filling and suchlike to buy safety — even if it’s at a disproportionately high price.  And because that’s the existing culture of such organisations, they are disproportionately likely to favour static typing, which fits into their SOP of investing extra effort up front to reduce the likelihood of accidents further down the line — however unlikely those accidents and however minor their consequences.  And I think this explains the near-universal tendency for big organisations to favour what I am going to suddenly start calling Stepladder-Safety Programming.

If I’m right, it explains a lot.  It explains why SSP-friendly languages like Java and C++ are so widely used (it’s because the relatively few organisations that use them are the large ones), but also why there’s always been such a strong guerrilla movement that prefers dynamically typed languages such as Perl, Python and Ruby — and indeed the various dialects of Lisp, if you want to go back that far.  It also explains why there is such a strong bifurcation between these two camps: static typing is often favoured in environments where anything else is almost literally unthinkable whereas dynamic languages are often found in startups where Get It Working Quickly is the absolute sine qua non, and SSP wouldn’t even be on the radar.  The fact that static vs. dynamic typing is embedded in cultures rather than just programming languages makes it much harder for people to cross the line in either direction: it feels like a betrayal, not just a technical decision.

And this of course is complete nonsense.

At bottom, static vs. dynamic is a technical decision, and should be made on technical grounds.  Cultural predispositions in one direction or the other are, simply, an impediment to clear thinking.

And finally …

The great technical impediment that prevents us from choosing to adopt either static or dynamic typing on a project-by-project basis based on a mature and disinterested judgement of whether the additional cost is merited in light of the project’s safety issues

Here is my big gripe: the choice of static or dynamic typing is so often dictated by the programming language.  If you use Java, you are condemned to name your types for the rest of your days; if you use Ruby, you are condemned never to be able to talk about types — not even when you really want to, as for example when you want to describe the signature of a callback function.

That’s just stupid, isn’t it?

Surely it should be the programmer’s choice rather than the language’s?
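To see the Ruby half of the problem concretely: when a method takes a callback there is no way to state its expected signature, so the closest you can get is a hand-rolled runtime check.  A purely hypothetical sketch (class and method names invented):

```ruby
class Downloader
  # We'd like to declare "the callback takes (status, body)", but Ruby
  # has no syntax for that, so we check the block's arity by hand.
  def on_complete(&callback)
    unless callback.arity == 2
      raise ArgumentError, "callback must accept (status, body)"
    end
    @on_complete = callback
  end

  def finish(status, body)
    @on_complete.call(status, body)
  end
end
```

Even this is approximate: Proc#arity reports -1 for blocks that take *args, so the home-made “declaration” is both optional and unreliable.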

There are a few bows in this direction in languages known to me.  One of them we’ve already mentioned: Perl’s optional use strict pragma imposes a few static-typing-like limitations on programs.  Most notably, it includes use strict ‘vars’, which requires each variable to be declared before use.  That’s pretty weak sauce, but at least a step in the right direction, which is more than Ruby offers.  Although the use of use strict is close to ubiquitous in the Perl world, there are exceptions: for example, I notice that the unit-test scaffolding script generated by h2xs does not use strict — presumably on the assumption that test-scripts are easy enough to get right that it’s nice to be allowed to write in the laxer style.

But, really, much much more is needed.  There’s no obvious technical reason why Ruby and similar languages shouldn’t be able to talk about the types of objects when the programmer wishes, and non-ambiguous syntax is not difficult to invent.  Conversely, would it be possible to relax Java and similar languages so that they don’t have to witter on about types all the time?  That might be a more difficult challenge — I don’t know enough about Java compilers or the JVM to comment intelligently — but it seems to be at least a goal worth aspiring to.
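To put some weight behind “not difficult to invent”: a hypothetical opt-in type assertion for Ruby — nothing like this is in the core language; it is invented here purely as a sketch — needs only a few lines:

```ruby
# An opt-in assertion: call it where you care about types,
# skip it where you don't.  Invented for this sketch.
def typed(value, expected)
  unless value.is_a?(expected)
    raise TypeError, "expected #{expected}, got #{value.class}"
  end
  value
end

def double_all(numbers)
  typed(numbers, Array).map { |n| n * 2 }
end

double_all([1, 2, 3])  # => [2, 4, 6]
# double_all("oops")   # would raise TypeError up front, FEFO-style
```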

Does anyone know of any existing languages where static typing is optional?

In conclusion …

In choosing what typing strategy to use in writing a given system (and therefore, given the current dumb state of the world, in choosing what language to use), we should give some thought to the costs involved in static typing and the risks involved in not using it — both the likelihood of error (which in my experience is often much smaller than we’ve got used to assuming) and the cost if and when such errors do occur.  My guess is that a lot of habitual Java/C#/C++ programmers are in the habit of doing Stepladder Safety Programming for essentially cultural rather than technical reasons, and that a large class of programs can be written more quickly and effectively using dynamic typing while admitting little additional possibility of error.

We do need static typing for life-support systems, avionics and Mars missions.

(Although come to think of it, the dynamically typed language par excellence Lisp was indeed used on Mars missions, to very good effect.  This was pointed out in a comment by Vijay Kiran on an earlier article: “Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.”  I’m not sure what to make of that observation.)


[1] Having your program talk about types can also enable optimisations that aren’t otherwise possible, but those benefits are not as great as people sometimes seem to imply; and since we all know that time spent in CPU is rarely the limiting factor in the kinds of disk- and network-bound programs that most of us work on most of the time, let’s ignore that for now.

[2] Well, it’s usually a good thing.  When a program talks about types too much, one of the real downsides is that it limits that program’s ability to work with objects of types that weren’t envisaged up front.  Then you end up having to invent horrible things like the C++ template system, and hilarity ensues.  But since we ignored the optimisation benefits of talking about types, let’s also ignore the flexibility benefits of not talking about them.

84 responses to “We can’t afford to write safe software”

  1. Nice post, interesting discussion. Having experience with both Java and Ruby, I’d like to mention the one reason why I like static typing (which wasn’t mentioned in your post): the cool stuff IDE’s can do because of it.
    Static typing allows reliable autocompletion and near-instantaneous automated refactorings.
    Sure, I’ve seen autocompletion for Ruby too, but I’ve never seen it work particularly well.
    And sure, you could use search&replace to rename a method, but it’s a hassle across multiple files and you might replace text in comments that you didn’t want replaced, etc.

  2. Lisp has optional type declarations, but I think they are intended as more of an optimization than as a safety feature.

    Erlang has optional type declarations as a safety feature via the dialyzer tool. I used it on one project and it helped me catch several small bugs, but it also required a significant investment of time and effort.

    Finally, Haskell is statically typed but type inference means that you can generally treat it as if it were a dynamically typed language.

  3. Nice post. Some points you omitted:
    (a) IDE auto-complete/intellisense in general negates a lot of the verbosity arguments
    (b) nearly any bit of commercial code will be read a lot more times than it’s written – notations with explicit or static types make comprehension an order of magnitude easier
    (c) It’s extremely rare that minimizing the amount of time to write a bit of code is a real-world objective. In my career so far most Perl scripts that were bashed together end up needing a migration path to being rewritten as production code (normally in Java/C#)
    (d) statically typed notations are easier to build tooling on top of – again see autocomplete/intellisense/navigation/typed search/refactoring features
    (e) optimisation of statically typed notations is often easier (or even just possible) because you know more at compile time.

    In the other direction
    (a) Typing everything is sometimes quite hard or complex (see generic type hell)
    (b) Type inference systems don’t currently have good tooling for debugging large structural types.

    Gilad Bracha was a prominent voice from the Java camp who advocated optional typing as a middle way. This has been picked up a little in C# through the dynamic keyword for run-time typing and the var keyword for local type inference.

    My own opinion here is that throwing off the shackles of static typing and freestyling all the way is never the right choice for systems. In fact, the only time I’ve found it appropriate is for “augmented-manual” work – e.g. when you have a unique bit of analysis to do and want a /truly/ throwaway bit of code

  4. In case anyone’s as confused as I was by Emma Watson standing in for Richard Dawkins: http://images.google.com/images?q=emma+watson+dawkins

  5. Thomas Stegmaier

    Yes, yes, yes! I so agree that _both_, static and dynamic typing, are needed – and would wish for a project to be able to gradually transition to statically typed as it grows and matures (especially for the lower-level parts, like libraries).

    The one language which I use much (but do not always enjoy too much), PHP, can actually do this to a limited degree:

    function do_something($my_var) {…}

    … works. You can force the given variable to be of a certain class:

    function do_something(MyClass $my_var) {…}

    The Interpreter will then throw an exception, which fits nicely with the “die early” principle.

  6. Aric Caley

    To some extent, PHP does this. You don’t have to declare variables, but you can declare them global or static. You also don’t have to specify types in methods but you can put in a type hint.

    Now, for other reasons than preventing bugs, I have implemented static typing in my objects by adding “annotations” in the class definition, which are parsed via reflection at run time and used for things like validating input data, formatting for the database, etc.

    But yeah, I’ve always been torn on the issue, having started (serious programming, anyway) with C and C++ but then switching to PHP, javascript etc.

    It would be interesting to design a language that truly supported both methods by choice..

  7. I think you are missing the main points of static typing:

    1) It makes reading foreign code easier (thereby helping future maintenance and evolution)
    2) It enables automatic refactoring and tool support

    For example: “def qsort(a)”

    Can I pass a list there? Or an array? Or a set? Or an enumeration? Or all of the above?

    When you tackle a code base that you did not write, this kind of question arises all the time and static typing helps tremendously.

  8. The more complex the application, the more static typing is beneficial. Small, low-complexity applications are better off with dynamically typed languages. Just MHO, after 30 years of professional software design/development experience in companies ranging from 1 employee (self) to 3000. Personally, I prefer strongly typed languages because it helps in thinking about the use of the data you are typing, at least for me.

  9. This is one of the things I love about Objective-C; it lets you specify types and have the compiler check them until it makes no sense to do so. Though being dynamic is not much less verbose in this case.

    I find the language yields well to my style of thinking.

  10. I can see a big problem with a static / dynamic typing switch in a programming language:
    It incurs overhead or leads to a de facto paradigm.

    The Perl community’s use of “use strict” for non-trivial programs creates two kinds of overhead.
    1) Mental overhead (“Is this a non trivial program?”)
    2) Language overhead (“We have to support both dynamic and static typing, even though the default is the one over the other in most cases”)

    In all honesty, if a project calls for a specific paradigm or has specific constraints, I rather pick a language that meets the project criteria, and take advantage of the optimizations already incorporated into the language itself.

    What I’d rather like is being able to mix and match languages, rather than paradigms. For example, using Java where strong typing is essential for some reason, and using JRuby on top of Java when flexibility or terseness is important, switching from the one to the other where necessary.

    A good example would be game engines: a core written in C/++ where the performance matters (graphics, networking, etc.), with a scripting language on top where the additional expressiveness matters (cutscene scripting, AI routines, scripts for events).

    It also allows, nay, forces specialization: C coders have to be good at low level, high performance computing, while level designers can focus on creating a narrative in a natural(ish) DSL, and nobody has to work outside of their area of expertise. The risk is, of course, becoming a one trick pony, but that can be overcome.

    TL;DR: Use the right tool for the job, and be multi-lingual.

  11. Consider what happens when you want to access a member of an object: a Java IDE can, assuming your code isn’t too messy, look up the type of the object you’re accessing (or the return type of the function you just called) and pop up a list of members. An IDE for a dynamically typed language can’t do that, unless you’re in the habit of adding type annotations to everything (at which point you’re back to the Java way). This means you spend a small but potentially crucial amount of time scrolling back to remember whether a method was called “meters2feet”, “metersToFeet”, “feetFromMeters”, etc. (even in code you wrote five minutes ago, if my own experience is typical). Moreover, since this happens again and again (one second thirty times a day as opposed to one minute two or three times a week), task switching overhead comes into play, and this can have devastating effects.

    In my opinion this moves the crossover point closer to the £20-book example than you might think.

  12. Roger Pate

    Python 3 added function annotations which are a step in this direction of optional static typing.


  13. Phil Toland noted:

    Finally, Haskell is statically typed but type inference means that you can generally treat it as if it were a dynamically typed language.

    Yes, I was planning to say something about type inference in the post, before realising I don’t know enough about it to do that without making a fool of myself. The more I think about Haskell, the more sure I become that it’ll be the next language I learn after I’ve got the hang of Scheme.

  14. Trying to ignore the static-vs-dynamic debate, but one nice thing that can make the “overhead” of static languages a lot nicer is type inference — it’s why I love ocaml so much. You don’t need to specify types, but when things get funky, you can add them when you need to.

    I’ve yet to delve into dynamic languages myself; definitely on my todo list :)

  15. I am truly horrified to learn (from Thomas Stegmaier) that of all the languages out there, the one that seems to approach my optional-static-typing dream is … PHP? I ask you, what is the world coming to?

  16. Cedric wrote:

    I think you are missing the main points of static typing:

    1) It makes reading foreign code easier (thereby helping future maintenance and evolution)
    2) It enables automatic refactoring and tool support

    For example: “def qsort(a)”

    Can I pass a list there? Or an array? Or a set? Or an enumeration? Or all of the above?

    In Ruby, and other duck-typing languages, the answer is generally “whatever makes sense”. For people who are used to statically typing everything, it’s hard to appreciate just how well this works and how easy it makes things. It reduces the amount of code that needs to be written by a huge degree.

    (Let me be clear here, again, that I am not saying that dynamic typing is therefore always the correct answer. But the answer to this particular question is very much a point in favour of, rather than against, dynamic typing.)

  17. To me, static typing is a lot less about safety than it is about readability.

    When I approach a function in dynamically-typed code, I often have a hard time understanding what it can operate on and what it returns – and consequently, a hard time understanding what it does. Statically typed code is far more readable, at least for me.

    The only case I prefer dynamically typed languages is when they are also weakly-typed: if types can anyway be implicitly converted to other types, associating a type with each variable is indeed unnecessary.

  18. > In Ruby, and other duck-typing languages, the answer is generally “whatever makes sense”. For people who are used to statically typing everything, it’s hard to appreciate just how well this works and how easy it makes things. It reduces the amount of code that needs to be written by a huge degree.

    You are wrong. Go read up on the concept of structural type systems, please.

    Not every statically typed language is as moronic as Java, and Java shouldn’t be a measuring stick for static typing. Because it would be a very, very bad one as Java is a bad language with an awful compiler and an utterly terrible type system.

    The whole post is, in fact, utterly terrible for that precise reason: the yardstick used for static typing is Java, and Java is garbage. Therefore the conclusion is along the lines of “static typing ok for some cases, but in general it’s no good”. What a surprise.

  19. I think when you talk about static typing, you actually mean “type inference”. For example, ML does type inference. That means you don’t have to declare the type of a variable before using it. However, you can optionally put typing declarations for clarity. Sometimes the ML compiler yells at you if it cannot infer the type at compile time. That’s when you put in some types to clear up any ambiguity. So I think the gist of your article is every language should support type inference to some degree.

  20. Brian N Makin

    Common Lisp uses strong dynamic typing. This means that the type is dynamic but still strongly checked by the compiler. You can also “optionally” declare the types for additional safety and optimization opportunities.

    IMO strong dynamic typing gives you the best of both the static and dynamic worlds.

  21. Mike: you missed the comment about C# having exactly this. Basically they have a new magic type called “dynamic” that allows you to call any method on it, with the risk of having it fail at runtime:

    dynamic foo = …;
    foo.bar(); foo.baz(); // whatever

    I’m not sure if they support a method missing/no such method mechanism like Smalltalk and Ruby.

    A professor of mine used to say about static typing mavens “so what are they trying to prove in the end? That the program is correct? Good luck.”

    Having said that, there are a lot of advantages to static types.

    Are you aware of the Go approach to this? The language has interfaces like Java, and every object that implements the protocol specified by an interface can be “auto cast” into that interface. Sort of like static duck typing.

  22. @Cedric: your points are unfortunately easy to counter, if *overly* verbose typing is used. reading a hugely verbose grammar takes just as long as parsing through code to infer what it’s doing.. it’s really the machine that has an easier time analyzing the program when via static analysis.

    moreover, your qsort example is ridiculous simply for the fact that you are assuming there will be no tests or other documentation. usually, tackling a code-base is difficult because of design issues – not missing type information. if you cannot understand what something is for, typing will not help you much. you’ll still have to see how it’s being used regardless of all the type-related stuff that’s thrown in. it’s a small part of the overall problem.

    bottom line is that static typing is useful for machines, and experienced programmers working in IDEs that offset the typing necessary to decorate everything.

    a hybrid approach can get the best of both worlds, but also the worst of both worlds. active study is necessary to determine which of the two cases will take place.. otherwise, you’ll likely just choose something under assumption that it will suit your developers better than the alternatives.

  23. Jon Bodner

    You should take a look at the work being done on Groovy++. It’s an extension to Groovy that allows you to declare that certain classes or methods require type declarations. Besides the documentation advantage, it provides far better performance than straight Groovy (which allows type info to be specified, but does dynamic lookups, anyway, IIRC.)

  24. I think the reason static typing is so popular is that it is a natural extension of C’s types – In order to be able to use the same operator syntax (+,-,*,/) for both floats and ints, we need to be able to distinguish between their use cases. Since we don’t want to incur a run-time overhead, the compiler uses a variable’s type declaration to tell how to translate operators into low-level instructions and then typing information need not be in the compiled program.

    This is the correct solution for sheer performance, but it falls short for the object-oriented paradigm. On the other end of the spectrum, we have Smalltalk-style syntax, where overloading is very flexible. The consequence of this approach is this: the basic elements of meaning are method selectors. Which means in order to understand what code means, you need to understand what a given method selector is meant to represent, and without formal Interface documentation, polymorphic code can be hard to understand.

  25. Jon Bodner

    Huh; looks like WordPress ate the plus-plus at the end of Groovy plus-plus (writing it out so it’s not swallowed again).

    [Mike: I fixed that for you.]

  26. In Ruby they often talk about duck typing, if it talks like a duck and quacks like a duck, it most likely is a duck. http://en.wikipedia.org/wiki/Duck_typing

    So if you specify in the documentation for your method what kind of an interface the passed in object needs to adhere to, any object can be passed in. And following along with this definitely gets easier with time. :)

    So in Ruby talking about an object’s type isn’t really a good idea; Rubyists rather ask an object whether it can do what they want.

  27. Check out Scala. It’s a language that runs on the JVM.

    What I write here might not be 100% correct, since my experience with Scala is limited to having read a few interesting articles about it. But my understanding is that Scala lets the programmer choose when to type things statically and when to type things dynamically. The Scala compiler does type inference and implicit type conversions.

    http://www.scala-blogs.org/2007/12/scala-statically-typed-dynamic-language.html provides some examples.

  28. I’m a big fan of how you seem to be able to take what I am feeling and express it in an elegant post.

    I have no idea why Python doesn’t have syntax to force a type, or to automatically throw an error when you pass the wrong type to a function. I’m getting sick of having to write type checking code so frequently.

  29. It’s interesting that a couple of commentators raise IDE issues with dynamic languages.

    It strikes me that a lot of the problem there is that the major IDEs tend to be based around compiled languages. It’s relatively easy to add in a new compiler backend and a new language syntax file into this well understood structure.

    In contrast, look at Smalltalk, where many implementations are based around the idea of editing objects within the Smalltalk runtime itself – there is less separation between the IDE and the running program – the IDE can directly introspect objects themselves, rather than applying rules to text files.

    The downside, of course, is that the IDE is locked into a specific runtime.

  30. Anonymous Coward

    you probably know that already (and it’s beside the point of your post): if you run your program with “-w” Ruby will print a warning if you attempt to use a variable that has not been assigned to before.
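    For concreteness, here is a sketch of the $color-vs-$colour bug from the head post, translated to instance variables (the class name is made up; note that this `-w` warning applied to the Rubies of the day and was dropped in later versions).

```ruby
class Sprite
  def initialize
    @colour = "red"   # British spelling assigned here...
  end

  def describe
    # ...American spelling read here.  @color was never assigned, so it
    # silently evaluates to nil.  Under 1.8/1.9-era Rubies, `ruby -w`
    # printed "warning: instance variable @color not initialized" at
    # this point.  (Misspelled *local* variables are stricter: they
    # raise NameError outright.)
    "a #{@color} sprite"
  end
end

puts Sprite.new.describe  # => "a  sprite" -- the colour quietly vanished
```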

  31. I think the main problem with languages like Java and C# is not so much strong typing as type redundancy. This makes them less readable, not more, than dynamic languages, and there’s little you can do about it.
    Compare this JavaScript:

    validateResult(result, [
      { condition: (this.returnDate != null && this.returnDate < this.departureDate),
        id: 'returnDate-button',
        failMsg: "Your return date occurs before your departure date" },
      { condition: (this.originSelected.code == null),
        id: 'originSelect-button',
        failMsg: "You have not selected a valid origin city" }
    ]);

    with the equivalent in Java:

    validateResult(result, new Validation[]{
      new Validation((this.returnDate != null && this.returnDate < this.departureDate),
        "returnDate-button", "Your return date occurs before your departure date"),
      new Validation((this.originSelected.code == null),
        "originSelect-button", "You have not selected a valid origin city")
    });

    The redundancy in this case is the unnecessary repetition of the Validation type. Unfortunately, no matter what you do to clean up the code, it'll remain less readable because of these kinds of redundancies.

    I really like the idea of type inference, because it gets rid of a lot of this redundancy. For example, instead of

    List<String> names = new ArrayList<String>();

    you could just write

    var names = new ArrayList<String>();

    With type inference, the example could have been shortened to:

    validateResult(result, new Validation[]{
      ((this.returnDate != null && this.returnDate < this.departureDate),
        "returnDate-button", "Your return date occurs before your departure date"),
      ((this.originSelected.code == null),
        "originSelect-button", "You have not selected a valid origin city")
    });

    This still isn't as readable as the JavaScript above (where the dynamic typing allows you to create the ad-hoc structs), but it's still cleaner than the Java. If you wrote this in JavaFX (which borrows heavily from JavaScript & Java), it's nearly as readable (and quick to write) as the original JavaScript:

    validateResult(result, [
      { condition: (this.returnDate != null && this.returnDate < this.departureDate),
        id: 'returnDate-button',
        failMsg: "Your return date occurs before your departure date" },
      { condition: (this.originSelected.code == null),
        id: 'originSelect-button',
        failMsg: "You have not selected a valid origin city" }
    ]);

    Marrying the two seems to be the way to go, because it's much faster and more concise to write, is arguably more traceable than dynamic languages alone, and you've still got the strong-typing safety net.

  32. Irony of ironies, I most often come up against accesses of unused variables and type errors in error reporting code that is infrequently executed and poorly maintained.

  33. The stairway metaphor was funny, but very wrong. It makes the assumption that static typing slows you down, and this is simply incorrect. As other commenters have pointed out, the few extra keystrokes are irrelevant. There is some extra code “noise”, but this is highly debatable – on the flip side, code becomes easier to read; you don’t have to guess what things mean from the context. The only advantage of dynamically typed languages is their flexibility, remarkably for advanced metaprogramming (MOPs), but this is a completely different discussion.

  34. Dennis Decker Jensen

    Your point is valid as long as we only look at the type systems of Ruby and Java, but the world is _much bigger_ than that.

    For example, not only do we have languages (like Go and Perl6 and others) that more or less add typing to help the programmer, we also have languages that have advanced type inferencing, starting with Hindley-Milner type inference (SML, OCaml, Clean, Mercury, and Haskell), which add more advanced typing going toward System-F in some cases.

    And then you mention enterprise culture versus small shop culture, but there are many small shops which go _faster_, because they are using advanced type systems in Haskell, OCaml, F#, Clean and what not. Testing with a toolkit like QuickCheck (Haskell, Erlang) is like unit and regression testing on steroids.

    And then as far as IDEs go, Smalltalk still has the upper hand in tools (refactoring and otherwise), because the IDE is integrated with the language, and this advantage transfers over to similar systems (Lisp, scripting languages, etc.), so even there static types are not the panacea they are made out to be. JavaScript has some issues that make static analysis hard, but they haven’t much to do with the lack of static types.

    And so forth. There is simply too much to mention.

    It is an interesting discussion, even though it is often watered down by ignorance; there is much more to be said than just comparing Ruby and Java.

    It looks like there is a much deeper gap elsewhere, which has more to do with how well one can deal with abstract concepts, no matter what type system is used.

  35. Type decls are nice when reading code, and type-checking really helps in langs like C where the wrong type can easily cause a crash.

    But in any language, you need testing, because type decls don’t know when you meant 1 or 2 or other behavior. I do a lot of C/C++/C#/Java, but when I did TDD in Python I found errors quickly and didn’t miss type decls.

    Ironically, I found type inference in C# nice for writing code, but not nice for reading code. Particularly when complete expressions were used to init a “var” decl. It also got in the way of manual refactoring, since type decls are required to declare a function, but what’s the type of that var I’m going to pass into it?

  36. There is a joker in the pack if you try to apply cost/benefit analysis to decide how risk-averse you should be: security. An ‘unimportant’ piece of code could compromise a much bigger system that really matters. This is, of course, one of those broad and ‘wicked’ problems for software development, of which the type-safety issue is merely one small case.

    You suggest that static typing, and other practices intended to reduce risk, are more prevalent in older companies. I sometimes wonder how many promising software-based start-ups fail to make it in the long-term, because they lose control of their code base, and especially of the semantics of that code. As some other commentators have noted, explicit typing can help in understanding a large system’s code.

    On the other hand, static typing can give only weak assurances of correctness, and only begin to capture a system’s full semantics, which is why it is possible to argue its merits either way. The program-proving techniques you discussed in earlier posts have both higher costs and greater potential benefits, and industry’s choice here has been overwhelmingly single-sided, though one could argue that this decision has been made largely in ignorance (an ignorance not only of the issues but also of there being a choice at all.)

    Finally, I am skeptical of the appealing but simplistic notion that, under duck typing, software does the appropriate thing. It may well proceed where a different approach would raise an error, and that may often be appropriate, but ultimately, appropriateness is a semantic issue that is beyond the purview of a language’s execution semantics to determine. Duck typing can certainly lead to elegant solutions, and elegance can sometimes help in being correct, but I wonder how often subtle errors go unnoticed because duck typing has papered over the cracks. Elegance is neither a guarantor nor a substitute for correctness.

  37. Duck Typing is a pretty dangerous practice overall, here is an article that explains why:


    As for the Smalltalk IDE: its refactoring capabilities were certainly ahead of its time back then, but they are extremely primitive compared to what Eclipse/IDEA do today. This shouldn’t come as a surprise: Smalltalk is a dynamically typed language, so a lot of automatic refactorings are simply *impossible* to perform with assistance from the developer.

    More details here:


  38. Of course, I meant “without assistance from the developer” above.

  39. I actually like dynamic typing, I think it is absolutely great for throwing code together and getting it to work.

    The place that static typing shines is when dealing with big ugly interfaces. For example, things like WebKit, or Quartz, or OS/360 TCAM. When groveling over the library specification and trying to figure out how to get the library to do what I want, there is nothing like knowing from the declaration that this call or method invocation takes a gwamp, a wamp and a bamp and returns a samp and a damp. Now, if I can just take the gramp and pamp I have and get the relevant gwamp, I’d be all set. I seem to always be doing this. Maybe I should write a program to help me solve this kind of puzzle, like a crossword or anagram helper.

    This isn’t really about the language requiring type declarations, but rather having the API documentation specifying what each piece wants and what it will give. (Having this kind of API specification is important even when the coding language doesn’t care about types.)

  40. Pingback: links for 2010-06-06 | GFMorris.com

  41. “On the other hand, static typing can give only weak assurances of correctness, and only begin to capture a system’s full semantics, which is why it is possible to argue its merits either way.”

    The whole point of object-oriented programming was to allow the programmer to extend a language’s type system so that it can capture a system’s semantics.

  42. Jeff Dege writes:

    The whole point of object-oriented programming was to allow the programmer to extend a language’s type system so that it can capture a system’s semantics.

    That seems like a dangerous delusion to me. The idea that a type system can capture semantics sounds more like the aspirational goal of a wildly optimistic mid-1970s research project than like any actual program I or anyone else has ever seen.

  43. Dear Sirs, an interesting article as always. I have a couple of minor quibbles. Firstly, Dawkins isn’t a fundamentalist, for any useful definition of the word fundamentalist. That Dawkins, always blowing stuff up. The rotter. Secondly this entire topic goes out the window when cpu cycles are on the line, which you dismiss a little quickly in your article. For instance if you need speed (games, simulators, modellers, renderers, art packages, compilers, ie all the good stuff) then this argument is dead before it begins. Static typing and a manly language like C is the way to go. Plus if you turn up to a job which makes this kind of software dragging your hyper lisp, your Spangle 5, Turncoat !pling, Quartz, Futtock DER or any other kind of obscure (ie useless) language you will be gently beaten and forced to relearn a C derivative.

    Incidentally, great comments as usual. I’d not heard the phrase duck typing in ages (probably looked at the idea briefly in the past, laughed, and moved on). Shame something that prone to causing errors hasn’t died along with every other rotten idea thought up in a drug-fuelled haze during the 1970s. Man. Mike Taylor, did you mean to say that Jeff Dege was dangerously delusional (perhaps) or that OOP was? (definitely)

  44. “That seems like a dangerous delusion to me.”

    I don’t think I disagree with you, but many of the “rah-rah, OO is going to save the world” articles I read, during the initial popularization of OO, expressed themselves in just such terms.

  45. Jeff Dege clarifies:

    “That seems like a dangerous delusion to me.”

    I don’t think I disagree with you, but many of the “rah-rah, OO is going to save the world” articles I read, during the initial popularization of OO, expressed themselves in just such terms.

    OK, Jeff, I get you now — the key word in your original statement was “was”, as in “The whole point of object-oriented programming was to allow the programmer to …”, without claiming that OOP actually did achieve this. Sorry for misunderstanding you the first time around.

  46. Hi, Jubber, thanks for the kind words. I am pleasantly surprised that we got up to 43 comments before the first Dawkins defence, and that even then it was such a polite one :-) No, he doesn’t blow things up; neither does Kent Hovind, though, and I don’t think many people would quibble with the description of him as the f-word.

    Efficiency, on the other hand, is very interesting. Let’s admit right up front that statically typed languages tend to be more efficient than dynamic ones, perhaps realistically by a factor of 2 or so, but let’s push the boat right out and allow a whole order of magnitude. My question would then be: why does this still matter? Moore’s law has been pushing along quite nicely for many decades now, and the result is that even if a game that I write in a dynamic language now is ten times slower than the same game would be in a static language, it’ll nevertheless be a hundred times faster than Quake was when I first played it back in 1995. (Quick maths: 2010-1995 = 15 years; Moore’s law doubles speed every 18 months = 1.5 years, so 15 years gives 15/1.5 = 10 doublings; 2^10 = 1024, call it 1000; throw away a factor of ten for using a dynamic language and I should still be 1000/10 = 100 times faster than Quake.) But we all know this is not the case. Why not? I truly don’t know.

    And if you want to feel really bad about this, consider the appalling performance of modern Netscape releases on 3 GHz machines compared with how clicky Netscape 3 used to feel on 33 MHz machines (i.e. once more literally a factor of a hundred). Where is all my speed going? I’m not talking about when a browser’s limited by network bandwidth, which has grown more slowly, but simple CPU-only operations like increasing/reducing font-size or even tab-switching.

    Uh oh … I feel a new blog entry coming on :-)

  47. “I don’t think I disagree with you, but many of the “rah-rah, OO is going to save the world” articles I read, during the initial popularization of OO, expressed themselves in just such terms.”

    I suppose you refer to the late ’80s and early ’90s, with technology like C++ and Eiffel. Remember that these were already second-generation OO, and one that already deviated a lot from Smalltalk (which many people consider the gold standard of OO up to this day, and the inventor of OO, although there were some precursors like Simula).

  48. In my field (games) software performance has not scaled with hardware performance – the reason for this is obvious. On a BBC Micro (C64 equivalent, American chums) two people could wring every last byte of information and trickery from the 32K machine in a year of coding. By the Playstation era Squaresoft needed 150 people to do 90 percent of the same thing with the PS1. And it took five years of the PS1’s lifecycle before people began to hit its software performance limits. Diminishing returns, larger coding teams, more complex software backbones, far more complex hardware, C++ and garbage collection take care of the rest. You mention Moore’s Law – well that’s a nice gentle curve over time. Come in close and consoles have lifespans, so it’s really big digital jumps every few years. If an interpreted or dynamic language loses me 10 percent or more of what I can draw in the 17 milliseconds I have to play with, that’s 10 percent fewer polys, fewer sprites, less everything. An entire sprite layer. My whore won’t look as pretty as the girls the pimp down the road is pushing, even if she’s just as much fun to play with. :-)

  49. Jubber, I can see how the matters you mention could explain a factor of 5 or 10 or even 20. But 100?

  50. Peter Lund

    “A much heavier one that”

    You didn’t really mean that, did you?

    (And if, as I hope, you didn’t, then you didn’t mean it on at least one blog post.)

    [Mike: Fixed now — thanks for spotting it!]

  51. Well, perhaps you would be 100 times faster than Quake – but what does that really mean? Quake used very low-resolution textures at a very low screen resolution, plus very low-poly models: 16 polys per sub-model with 16-by-16 textures on a 320-by-256 screen. (486 era? I may be out by some factor on the version you played.) A model in Crysis might have thousands of polys, 512-by-512 textures, a 1920-by-1080 res with four times the colour depth (101 times the workload just for that resolution jump) – and then you start adding in multiple screen redraws for feedback, lighting, shadows, HDR, then 3D sound, massive worlds, more advanced AI, all sitting on a far larger OS and DX backbone… In other words, 100-times-faster running code isn’t nearly fast enough. If I have understood the point you are making.

  52. OK, Jubber, you have a point when it comes to increasingly realistic Quake-like graphics. It is indeed easy to see how the processing needs of that kind of drawing have kept pace with Moore (although isn’t that stuff mostly done in the graphics card now? I’m still not 100% sure what the actual CPU is doing all that time.)

    I guess the example of Web browsers exercises me more. Partly that’s because browser responsiveness seems to have gone backwards in the last decade and a half; but maybe more importantly, it’s because I know more or less what it takes to build a browser, and could imagine doing so myself, whereas I know that I couldn’t make an advanced Quake-like game.

  53. About two hours ago, I was head-deep in an enhancement to a PHP site that I’ve somehow inherited.

    I had occasion to look into the authenticate() function, which is called from the login page with username and password.

    It begins:

    function authenticate($loginname, $pass) {
        $username = sqlclean($username);
        $pass = text2pass($pass);

    The sqlclean() function cleans up data entry fields, to prevent sql injection, etc. text2pass() hashes the password.

    If you notice, the variable on which sqlclean() is being called is not the variable that is being passed into the function. In fact, the $loginname variable is not being guarded against sql injection at all.

    There is simply no way that a language with static checking would allow this error to get by. And this isn’t the sort of error that is likely to be caught by anything other than the most exhaustive automated testing.

    I’ve worked in languages with dynamic typing, and in languages with static typing. I can’t say I’ve found dynamic typing significantly faster, except in a few specific problem areas, where I’ve found the sort of programming errors that static typing catches to be all too common.

  54. Brian N Makin

    I also believe that explicit static interfaces help in communication. When you have teams which are distributed in space and or time (especially time) the static typing can really help.

  55. Nathan Myers

    Mike, you have so much to learn for someone your age. I’m proud of you for making the effort.

    Starting with trivia: the next version of C++, coming directly to a compiler near you, supports type inference as in ML and Haskell. You just say, e.g., “auto i = expr;” and the type of variable i is the type of the expression expr. It has similar syntax for functions and for a few other situations. Microsoft swiped the syntax and pushed it out, in C#, ahead of time.

    Next, the true value of annotating type information isn’t safety, and it isn’t documentation. It allows you to have the compiler automatically do the right thing based on the type. Most trivially, if you call sort() on a list, you get a list sort, and on an array you get an array sort, both correct and optimized. If your language won’t use the type information you give it for that, it’s a cruel joke. (E.g. C# or Java.)

    That’s not to say that improved safety and documentation are minor benefits. That’s also not to say that all those benefits are always worth the effort. In small, easily tested programs — the correct domain of Ruby — it can be a tough sell. The danger is that small programs grow to large programs.

  56. Anonymous Coward

    Point up for the static-code-is-easier-to-read argument: I’ve been learning a PHP codebase for about a month now, and it has been by far the biggest waste of time to try to find where random variables like $usr_data are assigned so I can hopefully figure out what it is, and therefore what that function does. Utterly frustrating.

  57. Nathan: I don’t want to be as impolite as you are, but if you are pontificating like that, you should have your facts a bit better together.

    Taking your example of the superior performance choices a compiler can make based on type annotations: in a dynamic language, the operation of sorting a list will be just as efficient for different data types as in a static language, with the exception that the decision of which algorithm to pick will be made at runtime, not at compile time. This might result in a slight overhead, but given the complexity of a sort operation, it will be insignificant. So your argument is misinformed.

    Also observe that even in most statically typed languages, you will get into situations where you need dynamic dispatch. That is, after all, what polymorphism is about, and why e.g. C++ has a vtable. Your argument won’t even hold for a global sort function (instead of an object member function) dispatching on static argument type, as you can see e.g. from the implementation of Java’s Collections.sort.

    The point of dynamic typing is that there are always a lot of interesting programs that cannot be expressed within the bounds of a static type system. Languages like C++ or Java get around that to some degree by having casts, but even then you end up with things that are not easily expressible using their respective type system.

    Whether that class of programs is large enough and beneficial enough to your problem domain is the interesting question when choosing a dynamically or statically typed language.
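    Martin’s point about dispatch cost can be made concrete with a sketch (the `smart_sort` helper is hypothetical): the run-time type test is one constant-time decision made in front of an O(n log n) sort, so it disappears into the noise.

```ruby
# One entry point; the algorithm is chosen by a run-time type check.
# The check costs a few instructions once per call -- trivial next to
# the O(n log n) work of the sort itself.
def smart_sort(collection)
  case collection
  when Array then collection.sort                  # array algorithm
  when Hash  then collection.sort_by { |k, _| k }  # sort pairs by key
  else            collection.to_a.sort             # generic fallback
  end
end

p smart_sort([3, 1, 2])    # => [1, 2, 3]
p smart_sort(b: 2, a: 1)   # => [[:a, 1], [:b, 2]]
```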

  58. Nathan Myers offers:

    Mike, you have so much to learn for someone your age. I’m proud of you for making the effort.

    Well, Nathan, I am impressed that you’re able to be so very patronising in so few words. I hope that talent comes in useful some day.

    Next, the true value of annotating type information isn’t safety, and it isn’t documentation. It allows you to have the compiler automatically do the right thing based on the type. Most trivially, if you call sort() on a list, you get a list sort, and on an array you get an array sort, both correct and optimized.

    This is of course absolutely nothing to do with static vs. dynamic typing — it’s a matter of strong vs. weak typing, which is quite orthogonal. Having “the compiler automatically do the right thing based on the type” is of course the very essence of duck-typing, which is about as dynamic as you can get.

  59. Gosh, it would be splendid to have somebody like Nathan Myers working on a team – somebody the younger coders could turn to for advice and gentle assistance can be invaluable. :-)
    That aside, I’m interested in the point you make about browsers seeming to get slower over time. Now it’s not my field, I’ve only made a few websites, but some of the same things that apply to games might apply here. Firstly, try running an old browser on a Pentium 3 – you might find that memory doesn’t match reality.

    But beyond that, old browsers loaded early-internet text-heavy HTML with no style sheets, PHP, security stuff or Flash animations. GIFs were optimised for post-text loading back then – nobody bothers now. Adverts barely existed back then. Perhaps that accounts for some of it. CSS is significant because I’ve seen loaded pages refresh just after the initial load and juggle elements around to fit the CSS. At least I assume this is what is happening. This causes a perception of slowness. Chrome caches pages as bitmaps to HDD – they actually redraw more slowly than a full refresh. Very stupid and very slow until we all have SSDs.

    Perhaps a games programmer and a web programmer should get together to create the ultimate responsive web browser. How’s your assembler? :D

  60. Seems like Haskell has a Dynamic Typing module for those situations where you really need it, which you don’t.

  61. @Nathan: “Starting with trivia: the next version of C++, coming directly to a compiler near you, supports type inference as in ML and Haskell.” – just to be picky, this statement is incorrect, or at least very imprecise, because C++’s type inference won’t be anything _even close_ to what is supported by Haskell and ML (i.e. a full-blown Hindley-Milner type system). That kind of type inference goes much further than simple scenarios like “auto i = expr”, which are now becoming commonplace in popular languages with more conventional static type systems like C#, C++, JavaFX Script, etc.

  62. It might be worth noting that even Java 6 has some very limited type inference. For examples, look at Collections.emptyMap() or the Google Collections Library. You can write something like:

    Map<String, Integer> map = Collections.emptyMap();

    Where the compiler infers the generic type arguments to the emptyMap() function.

    @Liam: I’ve never programmed either ML or Haskell, or any other language with strong type inference. However I’m certain that there are situations which cannot be expressed in any type system. For example more or less anything that involves methodMissing/noSuchMethod. Again, whether that is a valid use case in your domain is up to you.

    There is this famous quote “every reasonably complex system ends up with its own half-assed, incompatible, broken LISP implementation”. Maybe every reasonably complex application ends up with its own cumbersome way of implementing dynamically typed parts…

  63. @Martin Probst: “Also observe that even in most statically typed languages, you will get into a situation where you need a dynamic dispatch. That is after all what polymorphy is about, and why e.g. C++ has a vtable. You argument won’t even hold for a global sort function (instead of an object member function) dispatching on static argument type, as you can see e.g. from the implementation of Java’s Collections.sort.”

    This is incorrect, if I read you correctly. The Collections.sort() methods don’t do any kind of runtime dispatch. The only “dispatch” that happens is compile-time, purely static – only overloading, not polymorphism. And because Collections only contains two sort() methods (only for List), I think you meant Arrays.sort()… check its source code; it’s just quicksort & mergesort algorithms, replicated many times for all supported array types. No polymorphism at all.

  64. @Osvaldo: my memory tricked me; what I was referring to is Collections.binarySearch(List, T), which does a dispatch based on the list type (two algorithms, one special-cased for RandomAccess lists).

    However I think you’ll agree my argument holds. There is nothing forcing a dynamic language to choose an inferior algorithm, the only thing potentially slowing it down is the number of choices (= dispatches) you have to make at runtime.

  65. Nathan Myers

    Martin, Mike: Yes, in runtime-typed languages you can dispatch on the type, but only at runtime. As a consequence, you can’t use type information to control code generation. It may not be clear why this matters so much. Indeed, it was not clear to the people designing C++ until Erwin Unruh walked into a meeting holding up a program that generated a list of prime numbers in the error messages emitted by the compiler.

    You might think that was just a cheap stunt, but it was a very short step from there to C++ programs being routinely faster and more readable than the equivalent Fortran program, whereas the equivalent C program is almost always slower and longer. (And, yes, the Matlab program is also slower.)

    You can intone “slight overheads” all you want, but I’m talking about multiple orders of magnitude. When programs take weeks to run instead of hours, or a hundred servers to carry the load instead of one, slowness gets hard to distinguish from immorality.

  66. Nathan Myers says:

    You can intone “slight overheads” all you want, but I’m talking about multiple orders of magnitude

    Not if Martin is right (and I think he is) that the overhead of dynamic dispatch is some small constant number of machine instructions per method call. That is indeed a “slight overhead” — the difference between n and n + c instructions, if you like. How can you get that to account for multiple orders of magnitude?

  67. @Mike: “Not if Martin is right (and I think he is) that the overhead of dynamic dispatch is some small constant number of machine instructions per method call.” – wrong. This “wisdom” comes from the old times – up to the early ’90s – when compilers didn’t do any advanced optimization (compared to the current state of the art).

    Modern compilers can aggressively inline method calls, even those that require polymorphic dispatch. This is remarkably true and powerful for dynamic JIT compilers, such as Java’s HotSpot. Inlining, in turn, gains you not just another few instructions saved (to pass parameters, set up a new stack frame, etc.); it gives the optimizer a much bigger chunk of code on which to perform further optimizations like register allocation, loop unrolling, common subexpression elimination and many others. That’s what puts you in the ballpark of orders-of-magnitude faster code. And it’s one of the major reasons why dynamic languages are indeed sluggish when compared to statically typed languages (there are others…).

    Very recently though, dynamic languages started to benefit from the same advanced optimizations. The Self system is actually the big precursor, and it was a dynamic language (Smalltalk-family). Now we start to see optimizers like the latest JavaScript JIT compilers in good browsers (i.e. anything except MSIE) that can do these tricks for a dynamic language. But they can’t yet compete with static-typed languages. Let’s say the dynamic stuff is now just 10X slower instead of 100X, that’s already some improvement. ;-)

  68. I think Paul Graham once said that every program with more than a certain level of complexity contains a subset Lisp interpreter. (He made his money doing an online store builder in Lisp that he later sold to Yahoo.) Lisp, of course, requires no type declaration and assumes a dynamic environment.

    There are reasons for dynamic dispatch in solving lots of coding problems, and dynamic dispatch doesn’t have to kill performance if it is done right. Apple uses it in Objective-C and there was a good article on its efficient implementation at BBlum’s Weblog-o-mat.

    Lisp allows fully dynamic runtime typing, but I also remember that MacLisp was challenged to produce numerical code as good as the PDP-10’s FORTRAN compiler’s back in the 70s. It succeeded by allowing type annotations, doing type inference and then doing some good old-fashioned optimization. (Each routine had an optimized entry and an interpreter-friendly compiled entry, so you could get full performance from the compiled subroutine, but you could still call it from command level.)

    Yeah, this is a pretty incoherent comment.

  69. Kaleberg, I believe you’re thinking of Greenspun’s Tenth Rule of Programming: “Any sufficiently complicated C program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp”. (I wish I knew his first nine Rules.)

  70. If you want a dynamic language with optional static types, have a look at Typed Scheme. It’s distributed with Racket (formerly PLT Scheme): racket-lang.org
    It has a sound type system, does some inference, makes it easy to mix typed and untyped code and you have access to the whole Racket ecosystem.

  71. I don’t think he had another nine rules. In fact, he isn’t quite sure of when he promulgated his tenth:

  72. Thanks, Vincent. This is the kick I needed to get back up to speed in my grand learning-Scheme adventure. (That’s gone by the way a little over the last few weeks, overtaken by my Tolkien obsession and the Doctor Who blogging, but it will return!)

  73. Pingback: How correct is correct enough? « The Reinvigorated Programmer

  74. [quote]things like the C++ template system, and hilary ensues.[/quote]

    Or, in fact, hilarity? Or are we talking about a clinton dropping by?

  75. LOL @iWantToKeepAnon. That is quite a typo you spotted.

    Thanks for drawing it to my attention. Now fixed, sadly.

  76. (a bit late, but…)

    In Ruby / any of the more-dynamic languages, enforcing types is still possible, just not in the function definitions. You just ensure that whatever object you are passed is either a “.kind_of?(Class)” or “.respond_to?(method_name)”, whichever is most “correct”.
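    A minimal sketch of that style of runtime guard (the method names and error messages here are invented for illustration):

```ruby
# Runtime type enforcement in Ruby: either check capability
# (duck typing, via respond_to?) or class membership (kind_of?).
def shout(thing)
  # We only care that the argument can upcase, not what class it is.
  raise ArgumentError, "needs #upcase" unless thing.respond_to?(:upcase)
  thing.upcase
end

def describe(n)
  # Here class membership is what matters: any Numeric will do.
  raise TypeError, "needs a Numeric" unless n.kind_of?(Numeric)
  "number: #{n}"
end

puts shout("hello")   # => HELLO
puts describe(3.14)   # => number: 3.14
```

    Which check is “correct” depends on whether you care about what the object *is* or what it can *do*; respond_to? is usually considered the more Ruby-ish of the two.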

    I’d also submit my theory that another reason for SSP in large organizations is that they often do waterfall-style programming. Up-front definition of classes, methods, and protocols is *essential* if you want to spread the actual production across 100 programmers of skill -1 to 11. In fact, simplify that: it’s all communication protocols. Without them, can you trust 100 people to work together, especially in a business setting where those 100 will probably not talk to each other?

    As to the function definitions, I’ve had plans for building a language to address this exact issue for a couple years now, but haven’t had time to dive into language-building. This adds a few wrinkles to my plans, thanks!

  77. The point of static type checking isn’t that types are checked, but that types are checked at compile time.

    Checking type inside the function, at run-time, is not equivalent.
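    A sketch of the difference in Ruby terms (the names here are made up): the run-time check only fires when the faulty call actually executes, whereas a compile-time checker would reject the program before it ever ran.

```ruby
# A run-time type guard only catches the error when the bad call executes.
def area(shape)
  raise TypeError, "not a shape" unless shape.respond_to?(:width)
  shape.width * shape.height
end

def rarely_called
  # This type error is latent: the program loads and runs fine
  # until this line is actually reached.
  area("not a shape at all")
end

puts "program started without complaint"
# rarely_called   # uncommenting this raises TypeError, but only at run time
```

    With static checking, the bad call in rarely_called would be an error at compile time whether or not that path was ever exercised.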

  78. Pingback: Tutorial 12b: Too far may not be far enough « Sauropod Vertebra Picture of the Week

  79. Pingback: Closures, finally explained! | The Reinvigorated Programmer

  80. To the author: static typing is optional in Common Lisp (probably in Scheme too, but I’m not sure about Scheme). You can use Ruby-like type-free code while prototyping, and then specify types for your variables. There are probably as many Lisp programming styles as there are Lisp programmers, but this style is explained by Peter Seibel in Practical Common Lisp (www.gigamonkeys.com/book).

    Please don’t reply with comments like “Lisp doesn’t have it” or “Lisp is dead”. It’s a mathematical concept brought to life in the form of a programming language. Nothing to hate, really. Treat it as just another programming language.

  81. Scheme is like Lisp in that the only reasons for type declarations are documentation and possibly hints for the compiler. Of course, the whole point of Scheme was to investigate just what it meant to say something was a “type” or “class”.

    Lisp was an important and powerful language since it had no true syntax, just a representation of program data structure which was just lists. It opened a lot of questions about data type and control structure, and since it was so amorphous and malleable, it led to a lot of interesting answers. Scheme was developed to more formally investigate a subclass of the relevant questions, and anyone who has read SICP realizes that it has provided some interesting answers.

  82. I find myself programming in Ada as a language of choice quite often. It’s at the far end of the heavy typing spectrum; I find myself having to add names and explicit declarations for things that C++ would let you omit, and it’s really quite burdensome. (I won’t do C++ because I refuse to work in languages that can’t bounds check arrays, and Java doesn’t feel as powerful … and is a language of necessity.)

    My other language is Python, at the other end of the spectrum. I gather from your comments it’s much like Ruby in the typing department. One of the reasons I choose Ada over Python for a project is speed; part of it may be static typing, but I understand that Python is slower than many of its competitors. But I also find that Python’s typing leads me into bad habits. It’s wonderful to sometimes say [0, “blah”, 3.14] without defining a record, but I ended up with a program where anonymous tuples were flying around, and stuff like

    if len(x) == 3: year = x[2]

    existed. I used a named record extension to clean it up, but Python seems positively averse to letting you specify a type that has these elements and only these elements; even if m is a movie named record, m.atomic_weight = 1 is still a legal assignment and will add an atomic_weight part.

    I suppose the point of this ramble is that a statically typed language at its best will get you to make what you’re doing clear in the code, and at its worst a dynamically typed language can fight a clear statement of what’s going on (and certainly any easy way to guarantee it.)
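    For comparison, the named-record fix looks much the same in Ruby, where Struct gives you a lightweight record that rejects members it doesn’t declare (the Movie fields here are invented):

```ruby
# A Struct replaces anonymous tuples with a named record.
Movie = Struct.new(:title, :year, :rating)

m = Movie.new("Alien", 1979, 8.5)
puts m.year   # => 1979

# Unlike an open bag of attributes, a Struct instance has no setter
# for members it doesn't declare, so a typo'd or stray field raises:
begin
  m.atomic_weight = 1
rescue NoMethodError
  puts "no such member"
end
```

    That still isn’t static checking – the NoMethodError only surfaces at run time – but at least the record can’t silently grow extra fields.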

  83. Yes, Python and Ruby are philosophically very similar as regards typing. I do agree it’s awful that these languages (and Perl) give you no way of optionally talking about types when you want to. I love it that they don’t force you to, but why on earth shouldn’t a Ruby method be defined, if you want to, as

    class Project
      def recruit(Employee e)
        # ...
      end
    end

