The long-overdue serious attempt at Lisp, part 2: is Lisp just too hard?

First, thanks to all of you who advised me on what Lisp to learn.  I found it really helpful to get so many different perspectives, and I hope those of you who advocated Common Lisp or Clojure will not be offended that in the end I chose Scheme.  (Actually, I bet you won’t be: one of the pleasant aspects of the comments on that article was the consistent sense that all the comments were of the form “I prefer this Lisp, but if you learn any of them that’ll be great.”)

A few people suggested that I should learn all the different Lisps rather than just picking one.  As a looong-term project, I guess I agree with that; but here and now, I need to pick one and run with it: and Scheme it is, for its minimality, elegance and purity.  I figured that if I learn Scheme it’ll be easier to move from that to Common Lisp later rather than vice versa.  (I was tempted by Clojure, too, not least because there is only one of it, but it feels just a bit too new-fangled to be a good starting point.)

So two days ago I started working my way through R. Kent Dybvig’s highly regarded book The Scheme Programming Language [amazon.com, amazon.co.uk].

The nice people at MIT Press sent me a review copy, which I am pleased about because I learn better from books than from websites; but for those of you who want to play along at home but don’t want to buy the book, its content is very helpfully online, free, at http://www.scheme.com/tspl4/.

About the book

I chose this book from among several that I was offered, in part because it seems to be modelled on Kernighan and Ritchie’s classic The C Programming Language, which I am a big fan of.  The title is an obvious nod, but several Amazon reviewers also made the point that it takes much the same approach, as does one of the jacket quotes:

“Kent Dybvig’s The Scheme Programming Language is to Scheme what Kernighan and Ritchie’s The C Programming Language is to C.  Dybvig’s book is either for the novice or serious Scheme programmer.”

– Daniel P. Friedman, Department of Computer Science, Indiana University.

I’d have to guess that Daniel P. Friedman is one of those serious Scheme programmers, though, because speaking as a novice, I am not finding it particularly easy going.  In part, that’s no bad thing: it’s slow because it’s dense.  Although I am only 54 pages in, that’s because those pages are worth several hundred pages of a lesser book — there’s no fluff, a lot of new material on each page, and plenty of exercises.

On the other hand, there are a few places where it seems the author is taking too much for granted, probably because his own familiarity with Scheme culture makes it hard for him to recognise which concepts the rest of us are not already familiar with.  For example, the book talks a lot about Scheme’s “syntactic forms” without saying what a syntactic form actually is.  The answer only gradually emerges from among the various examples, and it turns out that syntactic forms have nothing to do with syntax or with form.  I really could have done with being told that up front.

[For those who don't know: there's only one form of syntax in Scheme, the S-expression.  "Syntactic forms" turn out to be evaluation regimes for S-expressions.  For example, whereas in the most typical form, "procedure application", expressions like (cons a b) evaluate every member of the top-level list before applying the procedure, the expression (quote (a b c)) does not evaluate its argument because it uses the "quote" syntactic form; similarly, (or a b c d) uses another form such that it evaluates a, but then continues to evaluate b and the remaining arguments, in sequence, only if all preceding arguments were false -- which is why (or #t (/ 1 0)) does not raise an exception.]
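The distinction is easy to mimic in any language with short-circuit operators.  Here is a rough Ruby sketch (method names are mine, purely illustrative):

```ruby
# An ordinary method call evaluates every argument before the call --
# the analogue of Scheme's "procedure application" -- so calling
# first(true, 1 / 0) raises ZeroDivisionError before `first` even runs.
def first(a, b)
  a
end

# `||`, like Scheme's `or` syntactic form, is not an ordinary call:
# the right-hand side is evaluated only if the left-hand side is
# false or nil, so the division below is never evaluated.
def short_circuit_or
  true || (1 / 0)
end
```
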

How to read the book?

I am actually not clear in my own mind how to read this book.  I’ve said before that I like books precisely because they get me away from the keyboard so that I focus on what I am learning rather than, well, all the other good stuff that distracts me from learning when I have access to my email and Hacker News.  But it feels like I’m going to learn more from this book, and hammer home the unfamiliar ideas more emphatically, if I do all the exercises as I go through.  And because they are so liberally sprinkled, that means going back to the keyboard every page or two.

With the Rails book, which I’m reading in parallel with this, I’ve been reading a chapter at a time away from the keyboard, then going back over the same material with my computer, making all the application changes that the book describes.  That’s worked pretty well with something like Rails, where what I am learning is techniques and conventions; I am not sure it’ll work for Scheme, where I am learning concepts.  I think I might need that to be more interactive.

So at the moment I am thinking that maybe the thing to do is put the computer aside completely and just read; then, when I’ve finished reading the whole book, go back over the whole thing with the computer, doing the exercises.  That sounds long-winded, sure; but then what I am trying to do is no small thing — not just to learn enough fragments of Scheme that I can customise GNU Robots, but to fully learn, to master, the language — to become comfortable with the concepts rather than merely tolerating them.  And maybe it’s unrealistic to think that I ought to be able to achieve that high goal with a single pass through a 491-page book, however dense.

But in fact, I’ve come up with another way of reading the book — a yet more inefficient approach that will require me to make three passes!  And yet which will, if it works out right, perhaps get me to my goal more quickly.  (Hmm.  Sounds like I may be over-Fowlering the simple process of reading a book?)  Anyway, read on to find out about the three-pass algorithm.

About the language

So far I have mixed feelings about Scheme (and about Lisp in general, bearing in mind that I don’t have more than a cursory Lisp background, and that I am learning Scheme as a Lisp.)  The economy and elegance are already obvious even this early in the process — for example, there is certainly something clean about just using (define name (lambda (args) body)) rather than having a separate defun — but even discounting such trivial irritants as the fail function being called assertion-violation, there are … issues.

I suppose I must be frank: I hate all the parentheses and the prefix operators.  Yes, yes, I know those are Clueless Newb complaints.  I know all about how Lisp syntax is powerful because it’s general and how prefix notation is just as expressive as infix and all that stuff — I’ve read my Paul Graham articles.  I hesitated even to mention it here, but, look, dammit, no, it won’t do.

Let’s read some COBOL together!

MULTIPLY UNIT-PRICE BY QUANTITY GIVING TOTAL-PRICE

That is bad syntax.  It’s verbose and lumpen.  We all know that and (I assume) agree on it, and no-one defends COBOL’s arithmetic and assignment syntax as acceptable any more.  We all agree that it’s objectively better to say:

totalPrice = unitPrice * quantity

And yet somehow Lisp’s mystique is so great that we all feel obliged to behave as though we believed that it’s equally good to say:

(define totalPrice (* unitPrice quantity))

But it isn’t.  It just isn’t.  Syntax matters.

And that goes double for non-trivial arithmetic.  The very first exercise in the book is:

Exercise 2.1.1. Convert the following arithmetic expressions into Scheme expressions and evaluate them.
a. 1.2 × (2 – 1/3) + -8.7
b. (2/3 + 4/9) ÷ (5/11 – 4/3)
c. 1 + 1 ÷ (2 + 1 ÷ (1 + 1/2))
d. 1 × -2 × 3 × -4 × 5 × -6 × 7

Sorry, but that is just dumb.  Arithmetic isn’t something you should have to devote precious brain-cycles to, it should just be there.  The solution to part a is of course:

(+ (* 1.2 (- 2 1/3)) -8.7)

No-one better tell me that that’s not objectively inferior to the infix version.  (I bet someone will, though).  And it gets way, way worse once you start writing actual programs as opposed to mere arithmetic expressions.
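For comparison (my own transcription, not from the book), the infix versions fit on a line apiece in Ruby, with Rational() standing in for Scheme’s exact fractions:

```ruby
# Exercise 2.1.1 in infix notation; Rational() keeps the fractions exact.
a = 1.2 * (2 - Rational(1, 3)) + -8.7
b = (Rational(2, 3) + Rational(4, 9)) / (Rational(5, 11) - Rational(4, 3))
c = 1 + 1 / (2 + 1 / (1 + Rational(1, 2)))
d = 1 * -2 * 3 * -4 * 5 * -6 * 7
# a is approximately -6.7; b == -110/87; c == 11/8; d == -5040
```
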

I will return to the subject of Lisp/Scheme syntax shortly, but first I want to talk about the problem of …

Learning two things at once

I think I could cope with Scheme syntax.  I don’t say that I could ever learn to love it, but I think I could cope.  But of course that is the very small tip of the iceberg of learning this language.  In fact it’s basically trivial — the real issue is learning to think functionally.  That is a much bigger and more important matter; and more generally applicable, too.  When Eric S. Raymond wrote:

LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.

He was not talking about learning to love parentheses, but about learning functional programming — learning to think about functions as entities in their own right rather than little gobbets of functionality clinging for dear life to the sides of classes.  The point of Lisp is closures, lambdas, continuations, list comprehensions, monads and all the rest of the bestiary of terrifying yet alluring functional concepts.  You know — all those concepts that your functional-programmer friends like to drop casually into the conversation to subtly remind you that you’re not as awesome as they are.  Closures I am happy with, thanks to Ruby; lambdas I understand with my mind but do not love with my heart; continuations I only understand as being “sort of like jmpbufs”; list comprehensions I can’t see the point of; and monads I … uh, lemme see now, I …  HEY!  WHAT’S THAT OVER THERE!!! [Points into the distance, turns and sprints for safety while you're looking the other way.]

This is big and important stuff.  Thanks to my closures-in-Ruby epiphany, I now understand that your functional-programmer friends really are more awesome than me in the sense that they have more tools in their kit.  In a frighteningly short time, I have flipped across into that segment of the population that wonders how you can ever get anything done in a language without closures (and, yes, Java, I am looking at you).  What if monads turn out to be similarly useful?  I don’t want to miss out on them.
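For the record, the closures-in-Ruby idiom I mean is the one where a lambda captures its defining environment — a minimal sketch, with invented names:

```ruby
# The returned lambda closes over `n` from its defining environment;
# `n` survives after make_adder returns -- that's the closure.
def make_adder(n)
  ->(x) { x + n }
end

add3 = make_adder(3)
add3.call(10)   # => 13
```
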

But the problem is that I am trying to learn to love Scheme’s dumb syntax and its powerful abstractions both at the same time.  And as any good scientist will tell you, you can only really understand what you’re doing when you change one variable at a time, holding all the others constant.  So I find myself thinking: how can I learn only one of those things at a time?

Learning one thing at once

It’s obvious that there are two paths: learn Lisp syntax first and then add in the functional concepts; or learn functional concepts first and then learn the Lisp syntax.  At first I thought that the syntax-first approach was impossible because there is no language with Lisp syntax but imperative idioms — until it hit me that that’s pretty much a description of Emacs Lisp.  As Steve Yegge’s handy tutorial shows, the Lispiness of Emacs Lisp is rather skin deep, and it’s pretty unashamedly all about the side-effect.

So one approach for me to take would be to learn Emacs Lisp properly — properly enough to write a non-trivial extension, like for instance a major mode for editing ScottKit source files, and maybe for compiling and running the resulting games.

The opposite approach would be to take a language with actual syntax but which also supports functional concepts, and learn to use that functionally.  Specifically, my idea is to do all the exercises from The Scheme Programming Language in Ruby, and only then to go back and do them in Scheme.  And this of course is the three-pass approach to reading the book that I alluded to earlier: once through with just the book; once more with the computer, learning functional programming by doing the exercises in Ruby; and then the third time, doing the exercises again but this time in Scheme.  (A useful side-effect of such an approach would be that I’d discover the hard way where Ruby doesn’t have the necessary support for functional programming.  But I know that it has closures, lambdas and continuations, and I don’t really see how list comprehensions are much different from a composition of closures, so I should be able to get some way before running into a brick wall.)
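That last claim is at least easy to sketch: what Python writes as a comprehension, Ruby can express as a chain of closures over Enumerable (example mine):

```ruby
# Python: [x * x for x in xs if x % 2 == 1]
# Ruby, as composed closures over Enumerable:
xs = (1..6).to_a
squares_of_odds = xs.select(&:odd?).map { |x| x * x }
# squares_of_odds == [1, 9, 25]
```
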

So far, I am not clear in my own mind what path I should take: learn to love Lisp syntax by writing Elisp?  Learn functional programming by doing Scheme exercises in Ruby?  Or just keep on ploughing through the book, continuing to do the Scheme exercises in Scheme, and hope that the syntax becomes tolerable as a side-effect while I am learning all the good stuff?  (The second approach is appealing for the same reason that I found it satisfying, when I was at school, to win the music prize by playing Haydn’s Trumpet Concerto on clarinet.)

Advice will be welcome at this point.

I do have one final point to make, but I’ve decided to save that for a separate article, because it’s fairly self-contained and potentially a bit controversial, and I don’t want comments on that one subject to overwhelm whatever useful advice I might get in response to this entry.

So — advise me!

Update (a few hours later)

As I continue to work my way through the exercises (using Scheme itself at least for now), I run into a problem: for exercise 2.9.3 I need set-car!, which the purity-fascist maintainers of PLT Scheme removed in release 4.0.  No doubt there is a secret incantation to undo this change, but I can’t immediately see what it is.  Can anyone point me to it?

<rant>

I just hate this kind of thing.  It’s like the absence of goto in Ruby.  It’s very nice that language implementors give me “better” ways to do these things, but is it really too much to ask that they treat me like an adult and let me be the judge of how I want to write my own freakin’ code!?  Language implementors, please remember these very wise words: mechanism, not policy.

</rant>

56 responses to “The long-overdue serious attempt at Lisp, part 2: is Lisp just too hard?”

  1. Note that in “regular” programming languages like Java, all operations are prefix except arithmetic. I.e. to call a function you have to write “foo(x, y)”, not “x foo y”. And that makes sense for the same reasons it makes sense in Lisp: generality, clarity, and order of evaluation in the absence of established norms (addition after multiplication).

    But I sympathise. I understand that it’s nice when your source code is actually just a tree written as an S-expression. But I really think syntax isn’t a bug but a feature in a programming language. It gives structure and recognisable patterns, making it easier to read the code.

  2. Read “The Little Schemer” and “The Seasoned Schemer”, and read & re-read The Scheme Programming Language.

    I’ve been at this “learning scheme” thing for a year & change now. And although I’m far from an expert, I can honestly say that Scheme is becoming the easiest way for me to turn an idea into code.

  3. Doing functional exercises in Ruby is fine if you want to wrap your head around the functional stuff first, but if you want to master things like macros, you’re going to have to use Scheme itself (and get used to its syntax too).

    I’d say do them in parallel: do an exercise in Ruby, and then once you’re sure you’ve got it right, do it in Scheme (instead of doing a separate pass). After a few days, try getting rid of the Ruby training wheels and see if you can write straight Scheme.

  4. mmm, what are the “objective” reasons for the superiority of infix notation? I think it’s a totally subjective question… the notation is only more familiar because we’re used to it from elementary maths classes. There is no other reason.
    (But if we look at x=x+1 from this elementary-maths point of view, it has another interpretation: an equation with no solution.
    Isn’t the “x=x+1” of programming languages strange?)

    Well, if we add some indentation to your example (+ (* 1.2 (- 2 (/ 1 3))) -8.7):

    (+ (* 1.2
          (- 2 (/ 1 3)))
       -8.7)
    

    looks better than the infix version imho (but takes more lines..)
    Another case:

    x=a + b + c + d + e
    (define x (+ a b c d e))
    

    what is better?
    This blog post has interesting thoughts about Lisp and its syntax:
    http://www.defmacro.org/ramblings/lisp.html
    (other posts are interesting too)

  5. Dennis Decker Jensen

    Hello Mike

    Nice to see, you’re taking the plunge into functional programming and Lisp. I did so myself 10 years ago, learning both Common Lisp and Scheme, after getting enough hints and nods on the Ruby mailing list back then.

    I hated the parentheses at first, in fact I refused to buy a book on the language because of that – that is, until a fellow on some mailing list revealed a secret: You don’t read them, just like you don’t read the spaces between the words right now. Of course they’re helpful, just like punctuation characters are, in fact they are immensely useful. I realize this doesn’t sound very helpful to you. I am sorry.

    It is going to take you a while to get over the hurdle. For me it took 14 days, focusing only on the indentation, and on where certain S-expressions went in the various programming constructions. That is 14 days not reading anything other than Lisp code, no other programming language at all, and no other programming books at all. When you reach a point where you not only accept S-expressions (including arithmetic expressions), but maybe even love them, and see their benefit – then you have reached that aha-level where they have become second nature. Of course this is in connection with the lambda calculus, which is the main engine and concept behind it all. That takes a lot of effort, more effort than reading the book, but you already know that of course. It is difficult for me to explain the joy and freedom from syntax, of never having to worry about precedence or what goes where, even for arithmetic, which I admit also took me a while to swallow before I could read Lisp programs (faster, I might add, than the code of other programming languages). Did you ever use one of the HP postfix calculators? Or try programming Forth? It is of the same nature. You might learn more easily with such a perspective. Mind you, a lot of it is just as much about habits and motor skills (in your programming editor: shift to Emacs and use Meta-(, Meta-], etc. before you go crazy. Vim also has a useful Lisp mode) as it is about learning new concepts.

    I think that is the hardest part about learning Lisp: You have to empty yourself almost completely to really get it. There is something to be said for emptying the glass before filling it… It sounds so cliché, I know, but take it from someone who’s been through it.

    The worst possible thing you can do is learn Elisp for the sake of syntax, and transcribe to Ruby to get along, going back and forth. Ruby (and Elisp) got some things incorrect (closures without lexical scope among other things), so I would stay away from it while you try to get Lisp under your skin. It’s not that dynamic scope is not useful, but you should learn lexical scope by default first to avoid bad habits. Don’t worry if you don’t understand this yet. There are some fundamental things you won’t learn if you try to transcribe the Lisp concepts into a language more familiar to you. You can do that later, but only later. It’s too early for that. More importantly, it is going to take you longer to learn Lisp – or to say it pessimistically: You’ll never grok Lisp, and will only learn to “hate” it or have a sceptical attitude towards the language for the rest of your life, if you don’t rid yourself of even some pretty common expectations – like arithmetic expressions (they are indeed a good example). This is true for all languages with paradigms away from the imperative main road of Von Neumann or Turing, be it Prolog, APL, or Forth, or …

    Dybvig’s book is excellent although I am not sure it’s the best for learning the language. A couple of my programming buddies claim it is. I myself learned from Structure and Interpretation of Computer Programs (easily one of the best programming books ever I have read – even better than The C Programming Language), and the books of Paul Graham and Peter Norvig (Common Lisp).

    I wish you the best in your endeavours. If you persist, I know it’s going to change your perspective on programming, a hard thing to do indeed.

    By the way: I did almost nothing but study Lisp 2 years afterwards before looking elsewhere again. For a while I was a small boy with the world’s best programming language and almost straight hostility towards any alternatives. I still use Lisp from time to time, although 2-3 dozen other languages are also interesting to me today. In fact Lisp is the main reason I became interested in Programming Language Theory in general. Lisp started me, just after that little Ruby :-))

  6. The reasons for lisp syntax are not obvious at the start and many proposals for infix lisp have been made. Once you delve deeper into the uses of lisp, the reasons and convenience will become clear. It really is worth it. Some of the reasons relate to functional programming but mostly they have to do with macros.

    Macros are a user-defined type of special form. (Do Schemers really say ‘syntactic form’? They mean ‘special form’.) A special form is just any kind of s-expression that starts with a symbol in the table of special forms. Some entries are in the table by default and others are added with macros. The compiler will process special forms differently from usual s-expressions, according to the definition of each special form. The key thing is that you can completely reprogram the compiler by defining your own special forms.

    Scheme is barely a functional language at all. It is a multiparadigm language — just like Common Lisp — that has built in encouragement for mountains of side-effects. Just try writing recursion on vector values and you’re downtown in side-effect city. Scheme does support functional programming without encouraging it too much and that is just fine.

    If you just wanted functional programming there’s Haskell. If your I.Q. is below the strictly enforced 185 minimum threshold for downloading Haskell, you could try Scala. But lisp is more than that. Functional programming plus macro metaprogramming is very powerful and everything else feels crippled after you learn it. And if you want that power, prefix notation and parentheses are the price of admission. The reasons why more obvious syntax would interfere with the combination become obvious with experience.

    Don’t skip the evolution by going back to Ruby. Deal with the Scheme way; it’s a required step on the path to enlightenment.

    The benefits of lisp, by the way, are most obvious on larger and more complicated projects. Lisp code size scales as something like the square root of Java code size; it’s not a linear advantage.

    Also, with Scheme you’re likely to be led to something like syntax-case macros; consider studying CL-style defmacro (or define-macro) macros instead. They’re both simpler and more powerful. Paul Graham’s On Lisp is available for download and printout as a guide (just the macro chapters, some of which use a bit of Scheme) and most Schemes have define-macro.

  7. Haven’t you heard the *real* meaning of LISP? Lots of Insipid Stupid Parentheses.

    No COBOL programmer outside of brain-dead academic courses has written MULTIPLY UNIT-PRICE BY QUANTITY GIVING TOTAL-PRICE since at least 1974. It’s:

    COMPUTE TOTAL-PRICE = UNIT-PRICE * QUANTITY.

    One of the insights I had soon after leaving academe is that the people who are “more awesome than you are” write really gnarled code and should be shot.

    Lastly keep in mind Kernighan’s Law: Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

    Nice, clear, unclever 3GL code will always be better than prefix lambda monad hoohah.

  8. Been there, done that: don’t do it. Just accept the steep learning curve and solve the exercises in Scheme.

  9. Tim Dellinger

    Do you recall the last time you had to use a tricky for loop? Perhaps nested loops? Rewrite it in functional style by mapping a function across a list. That was the ESR-style enlightenment for me. I never hated for loops until I learned the functional style.
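A contrived before-and-after of the sort this comment describes (example mine, in Ruby):

```ruby
# Imperative: an explicit loop accumulating into a result array.
xs = [10, 20, 30]
squares = []
for x in xs
  squares << x * x
end

# Functional: map the function across the list -- no loop bookkeeping.
squares2 = xs.map { |x| x * x }
# both are [100, 400, 900]
```
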

  10. Mike– this is about Python, not to convert you but so you can keep up to date with the imaginary parallel you who is following the Python rather than Ruby track.

    Between those two disguised lisps, Python’s syntax is even less goofy, and list comprehensions in Python look so natural you almost forget that there ever was a language without them, or a way to do without them…almost.

    [ x**2 for x in [ 1, 2, 3, 4 ] ]

    [ a+b for a,b in pairs ]

    (Haskell’s list-comprehension syntax is pretty nice although inside it’s still so… Haskelly.)

    Python has closures, and a way to access abstract parse trees of functions, but no tail-call optimization (except in stackless?) or call-with-current-continuation. It’s by no means serious as a lisp, but I tend to slide easily into functional thinking when I’m using it. I think the default SICP at MIT is in Python right now.

    jneira –what’s objective about bad syntax is the amount of short-term memory and thinking time one has to spend on it. Also, by being more computer-readable than human-readable, I think it makes you think more about interpretation rather than plain meaning.

    (The defmacro link looks interesting.)

  11. Hello,

    I love functional programming, but I hate lisp syntax. No matter how hard I try, I can’t. I’m not saying it’s not good, general or whatever, I just can’t like it.

    That’s why I like erlang! Functional programming + erlang processes and lovely syntax. Win :-).

  12. Try having a look at ‘Structure and Interpretation of Computer Programs’.
    Online version is here:
    http://mitpress.mit.edu/sicp/full-text/book/book.html
    There are online lectures here:
    http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussman-lectures/
    Scheme/Lisp is introduced early on.

  13. Regarding the removal of set-car! — this was not a capricious change. PLT Scheme changed to use immutable cons cells by default, which has the advantage of encouraging a more functional style of programming. You can learn how to specify the use of mutable cons cells for your code, if you like, or you can learn how to do what you need to do with immutable ones.

  14. As a response to the primary issue you raise in this post — I think you are just getting overwhelmed by too much at once, and over-thinking the learning process. Yes, prefix math, and to a lesser extent, prefix code, looks a little weird. But believe it or not, after you’ve read and written a few hundred to a few thousand lines of it, your brain will adapt and it won’t bother you any more. I don’t think it would be a good idea for you to switch to Emacs Lisp or another functional language to get through your discomfort — this will only confuse you more. I think you just need to keep coding simple exercises and reading simple examples. If the exercises from the book of your choice are overwhelming you, pick some from How to Design Programs or Structure and Interpretation of Computer Programs — both freely available on-line.

  15. Thanks to all of you for the comments so far, especially the very detailed one from Dennis Decker Jensen. I have to admit that even in the time since I wrote this article (24 hours before posting it), the Scheme exercises are coming along noticeably more quickly. (Except when the darned language doesn’t have the primitives that the exercises demand, grrrr!)

    It’s interesting to see the high level of unanimity that trying to split the novelty (by either learning imperative-style Lisp or doing functional-style exercises in Ruby) is a mistake. For now, at least, I probably will stick with the exercises in the book.

    Regarding the removal of set-car! — the change may not have been capricious, but that doesn’t mean it was acceptable. This sort of backwards-incompatible change is a big, big deal, and should never be made without a much more compelling motivation than a desire for purity. To change the default behaviour — fine. But there ought at least to be a command-line option to restore the old behaviour, especially for something sufficiently core to be in the first 50 pages of textbooks.

    SKalsi, SICP was my other main candidate book for this project; TSPL won out, despite the former’s very high reputation, simply because I was able to obtain a physical copy. Depending on how this goes, I may go on to work through SICP as well after I’m done.

    Rodman, Arc is a research project. No doubt a brilliant one, but there’s no way I’m ready to be involved in that at this stage. I need a solid, reliable Lisp dialect that’s not going to go changing under my feet while I am still learning the basics. (Kind of ironic, given the Astonishing Case Of The Disappearing Set-Car!, huh?)

    Tim Dellinger, I am already a convert away from for-loops — I don’t think I’ve used more than a couple since I switched to Ruby. collection.each { closure } is the way to go: it very quickly feels very natural.

  16. Regarding set-car! etc., you should read the DrScheme documentation on mutable pairs:

    http://docs.plt-scheme.org/reference/mpairs.html

    Basically, you need to import a library, and cons/car/cdr/set-car!/set-cdr! become mcons/mcar/mcdr/set-mcar!/set-mcdr! (and the non-mutable pair forms cons/car/cdr peacefully coexist with the mutable forms, though they are distinct). It’s annoying when trying to learn Scheme from a textbook, but it really is better. DrScheme is full of innovative stuff.

  17. I believe set-car! is still in some library.

  18. Emanuel Evans

    I know you discounted the “fancy-shmancy newfangled” languages off-hand, but based on your complaints, you really should give Haskell a look. It has the purity of Scheme (actually, more purity) with a *much* better and easier-to-learn syntax (I actually like the syntax better than Ruby), so you can focus on thinking functionally rather than counting parentheses. The advanced stuff gets very difficult–I’m in the process of struggling through learning monads–but the syntax is easily mastered in an hour or two. It should be easy to go back to Scheme or Lisp after learning Haskell. The only problem is the dearth of books on Haskell, but there are a few good ones (such as Real World Haskell).

  19. Just to reiterate the point a few others have made, I’d argue that with the profusion of relatively functional languages (and even more-functional languages like Haskell), learning a Lisp for functional concepts is a mediocre idea. What I recommend learning a Lisp for is /macros/ (also as mentioned above, any time someone uses the phrase “syntactic form” you can mentally replace this with “macro”, though at some point you want to come back and realize that the first name fits).

    Stick with it, maybe accelerating your sojourn into the macro-related chapters if you find yourself getting sick of the base syntax. That’s where it all becomes worthwhile.

  20. Thanks, Mike Vanier. I had actually already seen that page, and knew about set-mcar! (I should have said so); but, dammit, no, I want to be able to use the sample programs in the textbook as they stand, not have to port them.

    The least bad thing I have found so far is:
    (require r5rs)
    which causes PLT Scheme to make all conses mutable and reinstates set-car! and friends. Unfortunately, even with this import, it still maintains the typographical distinction between mutable lists (which are enclosed in {}) and immutable (which use () as normal). But since that is a difference that applies only to my output and doesn’t need to affect my programs, I guess that is good enough for doing the exercises.

    [A note to the worried: yes, I do see why immutable data structures are better -- in fact, that was more or less the conclusion of my PragPub article, Tangled Up in Tools -- and I don't plan to use set-car! in actual Scheme programs once I reach the stage that I am writing them. It's just a matter of getting through the already-demanding tutorials and exercises with minimal additional and avoidable awkwardness.]

    Emanuel Evans, I am attracted to Haskell; but one thing at a time! (I say one: already I am learning Rails and Scheme at the same time, plus installing and configuring an EPrints archive; but, at least, let’s keep it to three things at a time. Oh, and I plan to review every episode in this series of Doctor Who, so make it four things :-))

  21. In the course in which I learned Scheme, we used Programming Languages: Application and Interpretation and the more well-known How to Design Programs. I obviously leaned more upon my instructor than the texts, but from what I read of them, they seemed to do a good job of introducing Scheme to those unfamiliar with its concepts (or rather, teaching programming languages, using Scheme).

    I found that the most difficult part, for me, was trying to figure out just *how* exactly to do things, especially since I could not for the life of me grok the PLT Scheme reference, which seemed to make perfect sense to my professor. Oh well.

    Oh, and I really hated DrScheme, mostly because the keybindings are very emacsy, and I’m a vi-guy. :)

    Don’t get discouraged, as it seems that this is one of those things that just suddenly “makes sense”, and there’s no way to predict when that’ll happen.

  22. I agree with some of the others here: you should give Haskell a try. One of the great things about Haskell is that you can switch between infix and prefix notation very easily.

    I learned Scheme in college and absolutely hated it in every class I used it. My last semester I took a class that used Haskell and that’s when the whole functional programming thing finally made sense to me.

  23. I think you’re getting hung up on a lot of different things here.

    If you don’t like how PLT Scheme works, use a different implementation. There are gobs of them; just pick a different one.

    Work through the syntax. Here’s what you should remember: Every other language has massively complicated, confusing syntax. Take a look at this loop in C:

    for (i=10; i-->0;) {
        printf("%d", i);
    }
    

    At first glance it looks invalid; then you think about it and understand how it works. This is syntax: it’s complicated and confusing, and it has funny edge cases that let you do strange things.

    And then there’s Duff’s Device, a syntactic perversion of epic proportions: http://en.wikipedia.org/wiki/Duff's_device

    Now compare this to Lisp syntax. There are a handful of literal data structures (strings, lists, vectors, maps). There’s quote, ampersand (&rest in some Lisps), colon (for keywords), semicolon for comments, and that’s about it. Unquoted lists are evaluated as function calls. I might be forgetting something, but this covers the vast majority of the syntax of Lisp.

    And the benefit here is that you can stop thinking about syntax. It’s so simple, it just fades into the background, letting you focus on the solution to your problem, rather than the constraints of the language.

    No Lisp programmer “counts parens.” Ever. If your editor doesn’t handle that stuff, you’re doing it wrong.

    Stick with it. You’ll thank yourself later.

  24. 1. About learning — the book that you chose might be too heavy as a way to learn the language from scratch, especially if you’re not even too familiar with functional programming. Perhaps something like HtDP would be more fitting — it’s definitely easier, but does some good teaching of functional programming and general design, and it is something that you could do in one pass.
    2. About writing Scheme syntax — yes, it’s a problem for people who are too used to infix syntax. But there are several things that you can do to make things better. First, choose a good Scheme-oriented editor — one that does more than just balance parens. For example, in both Emacs and in DrScheme, you could get used to a mode of work where you always hit Alt when you open a paren (or a double quote, etc.) — this will make the editor always insert a balanced pair of parentheses, which means that you don’t have to count parens, and you don’t have to stare at the flashing paren to know how many you need to close; you just *always* work in a fully balanced context. Combine this with an editor that auto-indents your code (again, both Emacs and DrScheme will do it for you), and you’ve got yourself an environment that will soon make parens disappear from your consciousness. (And that *is* how Lisp/Scheme programmers work: we just don’t see the parens anymore; we’re only aware of the structure of the code.)
    3. About mutable cons cells — well, that deserves three entries.
    3a. No, that’s absolutely not about preventing you from shooting your own foot. Besides mutable pairs, PLT has mutable structs that you can define, and an array of other built-in types that rely on mutation in some way. You’re thereby free to shoot your foot — multiple times, in multiple styles. The problem starts when you need to write library code that deals with lists. This is when things quickly get un-funny and un-fun. For example, for the vast majority of list-related Scheme and Lisp code that you’ll find, there’s always an implicit assumption that the client code is well behaved — for example, in a loop that cdrs down a list, you almost always assume that the links don’t change underneath you, and that chasing the cdrs you will eventually reach the end of the list (rather than get sucked into a cycle). Why is this bad? It’s bad because you might be using some library A that gets broken because you’re also using library B whose author decided to violate some of these assumptions — and the result is a mess that is extremely hard to debug. For a very concrete example of this, look for “seg fault” in that blog post — this happened also on Chez, a highly respected implementation written by the author of your book. This is of course fixable, but in a way that makes things slower in various ways — and all of that is for a feature that is almost unanimously considered bad practice. (If you ask around, you’ll see that basically the only reason people want to keep it is for legacy code. Otherwise it would probably have been gone by now.)
    3b. A separate subject is whether you *want* to use it. There are some cases where you certainly want a mutable pair. For example, there are uses of a pair as a simple container for a value that you can change; and there are uses where you want to implement a kind of dictionary that associates keys with mutable values. PLT has `box’ — a single-mutable-value container which you can use in almost exactly the same way that you use a pointer in C — and that covers a good number of cases that usually use the mutability of cons cells. There’s also a (mutable) hash table, parameters, and mutable structs, but just boxes would get you a long way. But seriously, the question is still whether you want to use it. You have this big challenge of learning a new language and style and paradigm — the concept of mutation in all of this is relatively minor, and certainly one that any brace-loving hacker (who often complains about sexpr syntax) is very familiar with. It’s easy to get into the same kind of games that you play in other languages when you deal with pointers. For example — represent a matrix as a list of lists, now write a function that transposes the matrix. This is a very cute one-liner in Scheme, where you get a new transposed matrix. Now change the requirement just a bit — you need to transpose the matrix *in-place* — only by changing cars and cdrs — no new pairs allowed. The result of this is going to be a day or two in pointer hell, and a result that is far *far* from a cute one-liner. And I’m serious about doing that — it’s horrifying enough to get the pointer habit out of the system… (The first time I did it, I spent most of the time reflecting on its utility as an exercise — my conclusion was that, other than torturing students, this is an absolutely pointless exercise in masochism.)
    3c. Finally, if you *really* want to use it… then there are several ways that can get you there in PLT. First, there are the r5rs and r6rs executables (plt-r5rs and plt-r6rs) — those will give you a language that is defined by the respective standard. Second, there are corresponding language levels in DrScheme that you can use — and in those levels lists are mutable. (Yes, they print with braces, but that’s because of other factors.) Third, you can use the default language (the one called “Module” up to version 4.2.4), and use `#lang r5rs’ at the top, which will allow you to work in that same context. Fourth, as Mike mentioned, you can just use m— names for mutable stuff and otherwise work in the default language. And finally, for really extreme cases of mutation-addiction, you can use the foreign interface to expose a `set-car!’ and a `set-cdr!’ pair of operations — it violates an assumption that the compiler makes, but hey, if you want to shoot yourself in the foot, then at least do it with style. In any case, if you’re interested in that — it’s a small hack that I posted once, which is now used to make Arc (PG’s language) work on a recent PLT Scheme. If you can’t find it, mail me and I’ll email it. (It’s no more than 5 lines, IIRC.)
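    To make the transpose example from 3b concrete, here is the purely functional version, sketched in Python rather than Scheme so it can stand alone (this is an illustration of the "new transposed matrix" one-liner, not the in-place cars-and-cdrs exercise):

```python
# A hedged sketch of the "cute one-liner" transpose from 3b, in Python:
# the matrix is a list of lists, and we build a brand-new transposed
# matrix without mutating the original.
def transpose(matrix):
    return [list(row) for row in zip(*matrix)]

m = [[1, 2, 3],
     [4, 5, 6]]
assert transpose(m) == [[1, 4], [2, 5], [3, 6]]
assert m == [[1, 2, 3], [4, 5, 6]]   # original untouched
```

    The in-place variant, by contrast, has no such one-line form: it is exactly the pointer-shuffling exercise described above.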

  25. Several years ago I had a sysadmin friend who started drinking the lisp koolaid. He was raving about closures, lambdas, continuations, lack of syntax, etc.

    At that point I had been programming for several years, and none of what he said made sense. I tried out lisp and couldn’t grasp anything but the most trivial examples and eventually lost interest/gave up.

    Some time later I discovered Ruby, after years of PHP it seemed pretty out of this world, but not so far gone that I couldn’t wrap my head around it. As I used it more and more I gradually picked up blocks/closures, lambdas and all that. When I finally turned my attention back to lisp, so much suddenly made sense. Ruby taught me functional concepts without me even realizing it at the time.

    Now I actually write quite a bit of lisp/scheme and still think Ruby was a good mid-point towards my learning. I just recently got over the prefix notation and had the same hangups as you. It just takes time to get used to it. After you deal with it enough it becomes second nature.

    Hey, thanks for explaining closures with Ruby in that linked post. In return, I’d like to explain list comprehensions. I like Python because of its list comprehensions.

    So what is a list comprehension?

    Definition: List comprehensions are functional programming tools (specifically map and grep, with lambdas) for SQL users.

    You know what a SQL query looks like, right? For example:

    SELECT table.element FROM table WHERE table.anotherelement = 42

    As a list comprehension, that becomes:

    [ row.element for row in table if row.anotherelement == 42 ]

    Great! So how does this relate to functional programming? Well, both statements perform a “grep” (filter in Python), and they sorta perform a map too. Plus there are implied lambdas in both. Let me explain by breaking things up. I’ll start with grep:

    Say “file” is a list of lines, and you want all lines that begin with “42”. This is the canonical problem for grep. In a Linux shell, you’d say:

    grep "^42" file

    I’m not a SQL guru, but in SQL you’d say something like:

    SELECT line FROM file WHERE line LIKE '42%'

    In Python, you could say something like:

    filter(lambda line: line.startswith("42"), file)

    (Ugly cuss, ain’t it?) Or you could say:

    [ line for line in file if line.startswith("42") ]

    I think that looks nicer, if a little redundant with “line”. But that redundancy disappears when you use list comprehensions the other way, as lambdas.

    Now, suppose we want to remove “42” from the beginning of those lines. In SQL, you’d say:

    SELECT MID(line,3) FROM file WHERE line LIKE '42%'

    This is apparently known as a “scalar function”. Now, in Python, you could say:

    map(lambda line: line[2:], filter(lambda line: line.startswith("42"), file))

    YUCK! Or you could say:

    [ line[2:] for line in file if line.startswith("42") ]

    Of course for some queries in SQL you don’t need the “WHERE” clause. And for some list comprehensions you don’t need the “if” clause. To chop the first two characters off every line as above, in SQL you’d say:

    SELECT MID(line,3) FROM file

    Or in Python I’d say:

    [ line[2:] for line in file ]

    Because I’ve had enough of lambdas, haven’t you?

    P.S. Can closures return a value? If so, replace lambda with closure above and it works even better.
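    Ken’s map/filter-versus-comprehension equivalence above can be checked directly. A small runnable sketch, with “file” faked as a list of strings since the original examples elide the data:

```python
# The lines-starting-with-"42" example, done both ways.
lines = ["42 answer", "no match", "42 more"]

# map + filter + lambdas, as in the "YUCK!" version:
via_lambdas = list(map(lambda line: line[2:],
                       filter(lambda line: line.startswith("42"), lines)))

# the equivalent list comprehension:
via_comprehension = [line[2:] for line in lines if line.startswith("42")]

assert via_lambdas == via_comprehension == [" answer", " more"]
```

    Both forms filter first, then transform; the comprehension just spells the lambdas out inline.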

  27. Many thanks, Eli, lots and lots of good stuff there, which I really appreciate. And I hope that others who are going down the same path as I am will also find it helpful. For now — and only for now, while I am working through set-car!-related exercises — I think that the #lang r5rs solution is nicest. But I faithfully promise to wean myself off mutable lists for the purposes of my own programs!

    Here is an (admittedly lame) example of where an exercise wants me to use set-car! and I can’t yet see what the nice, clean solution is. Exercise 2.9.3 invites me to modify the stack object (which was built earlier in the chapter) to add ref and set-ref! messages. It says “[Hint: Use list-ref to implement ref and list-tail with set-car! to implement set!.]”

    Here’s my solution, which works with (require r5rs)

    (define make-stack
      (lambda ()
        (let ([ls '()])
          (lambda (msg . args)
            (cond
              [(eqv? msg 'empty?) (null? ls)]
              [(eqv? msg 'push!) (set! ls (cons (car args) ls))]
              [(eqv? msg 'top) (car ls)]
              [(eqv? msg 'pop!) (set! ls (cdr ls))]
              [(eqv? msg 'ref) (list-ref ls (car args))]
          [(eqv? msg 'set!) (set-car! (list-tail ls (car args))
                                      (car (cdr args)))]
              [(eqv? msg 'debug) ls]
              [else "oops"])))))
    

    It’s obvious how the other non-readonly operations, push! and pop!, can work by setting a new value for the local variable ls, but I can’t see a neat and efficient way to make a new version of the list with the new value stuck into the middle — that is, I can’t see a clean way to build such a list using cons/car/cdr.

    Is it obvious? How should I be doing it?
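    For comparison only — not the cons-based answer the exercise is after — the purely functional “new list with one element replaced” is easy to see in Python, where slicing plays the role of list-tail (the name set_ref is made up for illustration):

```python
# Build a fresh list with position n replaced by x; the original
# list is never mutated (no set-car! equivalent needed).
def set_ref(ls, n, x):
    return ls[:n] + [x] + ls[n + 1:]

stack = [10, 20, 30]
assert set_ref(stack, 1, 99) == [10, 99, 30]
assert stack == [10, 20, 30]   # untouched
```

    The Scheme analogue would rebuild the prefix with cons while reusing the shared tail, which is exactly the part that is fiddly to write by hand.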

  28. Thanks, Ken, for your SQLtastic exposition of list comprehensions. I won’t give it the detailed response that it deserves right now, because I plan to write an article on this subject; when I do, I will link to your comment.

  29. Well, your code looks like it’s doing what it’s supposed to do — and yes, it is an example where boxes fit nicely. (By the way, some people would claim that the kind of mutation that you’re using there is not too bad — in some sense using `set-car!’ is better than using `set-cdr!’ since it doesn’t modify the structure of the value if it is viewed as a list.)

    Instead of trying to figure out how to write code in a blog comment, I just dumped a file at http://tmp.barzilay.org/stacks-etc.scm with a few versions of your code.

  30. Thanks, Eli, it’s really helpful to see these successive versions … even if I lost my thread (for now) by the third version. I will return to it in due course.

    Here’s the code, direct inline, for posterity. (You can best paste code into WordPress comments by enclosing it in {source language=”WHATEVER”}…{/source}, but using square brackets rather than curly. The list of supported languages doesn’t seem to include “lisp” or “scheme”, but since Lisp has minimal syntax and therefore requires minimal highlighting, just omitting the language=”WHATEVER” bit seems to work OK.)

    ;; original example, slightly modified
    (define (make-stack)
      (let ([ls '()])
        (lambda (msg . args)
          (cond [(eqv? msg 'empty?) (null? ls)]
                [(eqv? msg 'push!)  (set! ls (cons (car args) ls))]
                [(eqv? msg 'top)    (car ls)]
                [(eqv? msg 'pop!)   (set! ls (cdr ls))]
                [(eqv? msg 'ref)    (list-ref ls (car args))]
                [(eqv? msg 'set!)   (set-car! (list-tail ls (car args))
                                              (car (cdr args)))]
                [(eqv? msg 'debug) ls]
                [else (error "oops")]))))
    
    ;; converted to using boxes
    (define (make-stack)
      (let ([ls '()])
        (lambda (msg . args)
          (cond [(eqv? msg 'empty?) (null? ls)]
                [(eqv? msg 'push!)  (set! ls (cons (box (car args)) ls))]
                [(eqv? msg 'top)    (unbox (car ls))]
                [(eqv? msg 'pop!)   (set! ls (cdr ls))]
                [(eqv? msg 'ref)    (unbox (list-ref ls (car args)))]
                [(eqv? msg 'set!)   (set-box! (list-ref ls (car args))
                                              (cadr args))]
                [(eqv? msg 'debug)  (map unbox ls)]
                [else (error "oops")]))))
    
    ;; more tweaks, but basically the same
    (define (make-stack)
      (define ls '())
      (define (a-stack msg . args)
        (case msg
          [(empty?) (null? ls)]
          [(push!)  (set! ls (cons (box (car args)) ls))]
          [(top)    (unbox (car ls))]
          [(pop!)   (set! ls (cdr ls))]
          [(ref)    (unbox (list-ref ls (car args)))]
          [(set!)   (set-box! (list-ref ls (car args)) (cadr args))]
          [(debug)  (map unbox ls)]
          [else (error "oops")]))
      a-stack)
    
    ;; some plt-specific tweaks
    (define (make-stack)
      (define ls '())
      (define (a-stack msg [arg1 #f] [arg2 #f])
        (case msg
          [(empty?) (null? ls)]
          [(push!)  (set! ls (cons (box arg1) ls))]
          [(top)    (unbox (car ls))]
          [(pop!)   (set! ls (cdr ls))]
          [(ref)    (unbox (list-ref ls arg1))]
          [(set!)   (set-box! (list-ref ls arg1) arg2)]
          [(debug)  (map unbox ls)]
          [else (error 'stack "unknown message: ~e" msg)]))
      a-stack)
    
    ;; Now this is really going overboard -- use a syntax to make up the
    ;; `make-stack', but could be used for other similar "class"
    ;; definitions.  It's pretty limited in the kind of "classes" that it
    ;; can define, and it's well beyond that stage of the book; but you can
    ;; take it as a quick demonstration of why exactly the s-expression
    ;; syntax is so popular.
    (define-syntax-rule
      (define-class-like (maker [field init] ...)
        [(method x ...) body ...] ...)
      ;; the above pattern is translated to the one below
      (define (maker)
        (define field init) ...
        (lambda (msg . args)
          (case msg
            [(method) (apply (lambda (x ...) body ...) args)] ...
            [else (error 'maker "unknown message: ~e" msg)]))))
    (define-class-like (make-stack [ls '()])
      [(empty?)   (null? ls)]
      [(push! x)  (set! ls (cons (box x) ls))]
      [(top)      (unbox (car ls))]
      [(pop!)     (set! ls (cdr ls))]
      [(ref n)    (unbox (list-ref ls n))]
      [(set! n x) (set-box! (list-ref ls n) x)]
      [(debug)  (map unbox ls)])
    
    ;; BTW, `define-syntax-rule' is a kind of a short form for
    ;; `define-syntax' and `syntax-rules' -- it can easily be defined using
    ;; them:
    (define-syntax define-syntax-rule
      (syntax-rules ()
        [(define-syntax-rule (name P ...) B)
         (define-syntax name
           (syntax-rules ()
             [(name P ...) B]))]))
    
    ;; (At this point, it should be clearer why the Scheme/Lisp world is so
    ;; full of incompatible dialects: a typical Scheme (PLT in particular)
    ;; is more than a language -- it's a playground for extending languages
    ;; and making up new ones.  (Yes, including list-comprehension...))
    
  31. Apologies for mentioning yet another language, but this seems to be relevant to the notion of “If you want macros, you need Lisp syntax”.

    Dylan (http://www.opendylan.org/) claims to provide macros while keeping a more familiar syntax. Never tried it myself, though. I somehow got bored with the whole enlightenment-through-programming-languages thing.

  32. Hi Mike,

    TSPL4 uses R6RS Scheme, which doesn’t include set-car! in the initial environment. Here is how to load it:

    #!r6rs
    (import (rnrs)
    (rnrs mutable-pairs))
    (define a '(1 2 3))
    (set-car! a 'n)
    (display a)

    (http://www.r6rs.org/final/html/r6rs-lib/r6rs-lib-Z-H-18.html)

    btw: The pound-bang makes PLT use the R6RS language.

    If you want to go with R5RS you might start with TSPL3:

    http://www.scheme.com/tspl3/

  33. Hi Mike,

    It sounds like you are in the same boat that I was in with Scheme: you are getting outside your comfort zone! You are in a critical period right now. This is the time that you really need to stick with it without reverting to what you are already comfortable with (Ruby). You should sign up to some Scheme mailing lists [1] and hang out on Freenode’s #scheme so you don’t feel so alone and isolated right now.

    It sounds crazy now but after two weeks you won’t see the parentheses anymore. The simplicity of the syntax is pretty nice; but once you grok macros you will really appreciate it.

    As to your proposed approach it sounds like you will take 3x as long to reach your goal and that results in 2x less time spent hacking with Scheme! TSPL is great if you can slog through it.

    Learning Scheme is really a blast.

    [1] http://www.wisdomandwonder.com/scheme

  34. All of the Scheme email lists are great and the PLT list is one of the best. You should join it.

  35. Pingback: Top Posts — WordPress.com

  36. martin wickett

    +1 for the ‘little schemer’ series — ‘the reasoned schemer’ is one of my favourite programming books.

    Would also recommend ‘teach yourself scheme in fixnum days’ which is a nice little introduction to scheme (you can then go on to the details by reading SICP & TSPL):

    http://www.ccs.neu.edu/home/dorai/t-y-scheme/t-y-scheme.html

  37. Lisp was originally designed as an interpretive language that could interpret itself. I think McCarthy at Stanford threw down the challenge. Interestingly, the idea in the first version was that programs would be written in M-syntax, which looked more program-like than lists, but the “reader” would convert it into s-expressions, which were just lists of conses. As it turned out, M-syntax vanished rather quickly and Lisp developed with its parenthetical syntax.

    Early versions of Lisps used a variety of binding techniques, but closures were not implemented in any consistent manner. On the other hand, one could build a variety of program systems for Lisp like languages that would implement advanced control structures. Planner, the real world solution and assertion management component of Winograd’s natural language block world system looked a bit like Lisp but was more like an early version of Prolog. Exploring these control structures led to innovations in the Lisp language.

    Meanwhile, MIT started to take computer science more seriously. There were computer science courses, and even a major, but no one was sure what should be taught, especially about computer languages. So you had a number of teaching languages, including subsets of PL/I and various block pseudo-languages for dealing with complex, and often parallel, algorithms. Professor Sussman decided that there was a reason to teach computer science, as opposed to toaster science, in the 20th century. After all, 19th-century steam engine science became thermodynamics, a field in its own right.

    So, he invented Scheme as a language for teaching about languages. That’s why a lot of people recommend SICP. It teaches about language structure first, Scheme second.

  38. Hi, Michelle, thanks for that link. It looks very interesting and relevant. I’ll link to it from the next instalment of TLOSAaL.

  39. Aaron Davies

    On an only vaguely related note, have you looked at APL or any of its descendants? I work in Q, a direct descendant of K, blended with some functional elements taken from Scheme. For a nice example, see this, where Q’s carefully chosen primitives lead to dramatically shorter code than LISP.

    I’ll admit right up front that Q isn’t (yet) a full-blown functional language in the sense of Scheme (let alone in the sense of Haskell), lacking as it does closures, lazy evaluation, etc., but I find it more than powerful enough already.

  40. Aaron, I admit I’ve not looked at any of the APL-like languages. I did do a short course in APL as part of my degree, but that would have been, er, about 22 years ago. I remember being both infatuated and appalled by it, and I am guessing that that combination of emotions is not an uncommon reaction :-) Without a doubt, APL lends itself to fantastically concise solutions to certain kinds of problems, but the resulting code is so line-noisy that it makes Perl look like Pascal in comparison. Even if I had the keyboard, that’s not a rabbit-hole that I’d want to go down.

    (For those who’ve not followed the links, here is some sample Q code from Wikipedia:

    (!R)@&{&/x!/:2_!x}'!R
    

    I mean to say, what? :-)

  41. Aaron Davies

    That’s K, not Q, and the obsolete K3 at that. In Q, that would read “(til R)where{all x mod’2_til x}each til R”. One of Q’s main design goals was to make the syntax a bit more bearable. (The other was to better integrate the database part of K.)

  42. I second the suggestion for reading ‘teach yourself scheme in fixnum days’. There is even a pdf or postscript version lying around that you could print.

    Once you’ve seen enough Scheme to be tempted to look at other dialects, I’m sure you would appreciate this screencast, which teaches how to implement the world’s smallest MUD in a functional programming style 8)
    http://peepcode.com/products/functional-programming-with-clojure

  43. Jaime Moreno

    I never understood why Logo wasn’t more popular, since it’s basically Lisp without the parentheses.

  44. Pingback: Git is a Harrier Jump Jet. And not in a good way « The Reinvigorated Programmer

  45. Hi,

    Maybe Genyris would suit you — it’s like Scheme but uses Haskell/Python-style indentation instead of (()). It’s also like Smalltalk, so (set-car! ..) is an assignment to ‘.left’:

    > define a ^(1 2 3)
    (1 2 3) # Pair
    > a(.left = ^N)
    N # SimpleSymbol
    > a
    (N 2 3) # Pair

    see: http://www.genyris.org/

  46. Pingback: The Little Schemer | Bookmarks on Programming and Music

  47. On infix vs prefix syntax: I initially had thoughts similar to yours, but I radically changed my mind, for several reasons.
    I once read a study about the usual arithmetic syntax and the issue of precedence. The article also had some relevant statistics (from a corpus that was, IIRC, the C stdlib, and their own experiments). First, there are rather few operations in a typical piece of code; and expressions involving more than one operation (a+b*c) are very rare. This opens the question of whether it is worth making a syntactic exception for operations. Moreover, the study showed that trained programmers made errors even when writing simple operations using common arithmetic syntax; one issue is that precedence is totally arbitrary, in the sense that it has no logical justification.

    For a toy language project of mine, I use a variant of prefix form, actually the usual “func call” syntax:
    +(a b)
    instead of
    (+ a b)
    or
    a + b
    This has several advantages. First, the pattern is identical to ordinary “routine” call (actually method invocation in my case). And indeed, +(a b) is kind of syntactic honey for
    type : a.type
    type.sum(a b)
    Operations (builtin and user-written) are type methods that operate on operands of this type (with possible conversion by the method, indeed, such as int to real). I find this view conceptually satisfying, especially compared to the ordinary one in OO, where a+b means a.add(b). Wrong, for me: a+b is *not* the application of a method on object a with parameter b; rather, both a & b are operands.
    In addition to syntactic and conceptual points, this also allows generalizing binary addition to an n-ary operation, using the same syntax, semantics, and even implementation. This is why the method is called “sum”. One can write:
    +(a b c)
    as well, and indeed:
    + numbers
    The same applies to product, and even more trivially to operations that basically apply to a set or sequence, like min/max, average & other statistics,…
    One apparent drawback is that a+b*c then reads
    +(a *(b c))
    which is seemingly verbose & complicated, like in Lisp. But this is (1) rare (2) unambiguous (3) regular. Actually, this form has the huge advantage, for me, to encourage simple code vs complicated expressions (as are common eg in the C culture), by writing intermediate expressions into temp named variables. Clarity before all ;-)
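    The n-ary generalization described above is easy to sketch in Python, using a variadic function (the name plus is made up for illustration):

```python
# +(a b) generalized to +(a b c ...): one variadic "sum" method
# instead of a chain of binary applications.
def plus(*operands):
    return sum(operands)

assert plus(1, 2) == 3
assert plus(1, 2, 3) == 6          # +(a b c)
assert plus(*[1, 2, 3, 4]) == 10   # "+ numbers" applied to a whole sequence
```

    The same shape works for product, min/max, and the other sequence-level operations mentioned.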

  48. PS:
    [rant topic="markup-lang"]
    I really find annoying blog and wiki languages that do not respect whitespace, esp indentation. This is regarding the user as stupid. I mean these leading spaces; let me say what I mean, please!
    [/rant]

  49. spir, that all makes sense: in fact, the single strongest argument for not worrying about prefix notation is simply that, as you point out, it’s pretty rare to come across code that contains complex expressions.

    Writing intermediate expressions into temporary variables, though … I agree that in the name of clarity, this is often a good technique; but it’s one that is strongly discouraged by most functional languages, and indeed flat-out prohibited by many of them. It’s the combination of both no temporary variables and unfamiliar complex-expression syntax that’s the killer.

    P.S. I totally agree on your post-script rant. Especially on a programming blog. If there was a setting for respecting whitespace in comments, I’d turn it on immediately.

  50. I broadly agree with your reply, Mike.
    On the point of clarity, the context is different, since my language is mainly imperative, stateful, etc… (actually totally OO). So defining an intermediate var costs about nothing; it’s just what we do all the time… but less in some programming cultures valuing cleverness at the cost of clarity ;-)
    Now, I understand your view about this from a purely FP perspective. And yes, FP programming will define the return value in a single expression. Some rare programmers will *present* it in a way that lets sub-expressions appear as clearly as possible.
    This also has a cost. For instance, quicksort in a typical, single expression, FP func requires traversing the seq twice (once for building each subseq, indeed) or even “thrice” (with the pivot). While intermediate vars for the subseqs can be built in a single traversal ;-) — python example code (quicksortly tested)

    def QuickSort(seq):
        # Base case: sequences of length 0 or 1 are already sorted.
        if len(seq) < 2:
            return seq
        pivot = seq[0]
        seq1, seq2 = [], []
        # One traversal partitions the rest of seq around the pivot.
        for element in seq[1:]:
            if element <= pivot:
                seq1.append(element)
            else:
                seq2.append(element)
        return QuickSort(seq1) + [pivot] + QuickSort(seq2)
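
For contrast, the single-expression style being described might look like this in Python (my sketch, not spir’s code): each level of recursion scans `seq[1:]` once per comprehension, which is exactly the double traversal he points out.

```python
def quicksort_expr(seq):
    # Single-expression functional style: the two comprehensions
    # each traverse seq[1:], so every level scans the data twice.
    if len(seq) < 2:
        return seq
    return (quicksort_expr([x for x in seq[1:] if x <= seq[0]])
            + [seq[0]]
            + quicksort_expr([x for x in seq[1:] if x > seq[0]]))

print(quicksort_expr([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```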
    
  51. Pingback: Tagore Smith on functional programming | The Reinvigorated Programmer

  52. Hi Mike – you may have heard that Carnegie-Mellon has reworked its introductory comp. sci. curriculum, deprecating object-orientation and emphasizing functional programming, together with abstraction, verification and parallelism. The lecture notes and other material for the functional programming course are available online and very readable, so it might be just the thing to reinvigorate your interest in the subject (it is working that way for me.) The only downside (from your point of view) is that it uses ML, but given your travail with Scheme, that might be an advantage. ML and its descendants (Miranda, Haskell, OCaml…) seem to have reinvigorated functional programming, so at least take a look…

  53. Pingback: My plan for 2012: do things that children do | The Reinvigorated Programmer

  54. First. Yes. There are slightly more parentheses, and it is slightly uglier in the (imo rather rare, for general programming tasks) case where you are using operators that are normally infix. So you lose that little bit of visual elegance. But what do you gain?

    > The precedence of ‘&’ in C/C++ just cost me another hour. It’s probably cost me a cumulative week of debugging time in the last quarter century.
    http://www.dadhacker.com/blog/?p=1981

    No. More. Precedence. Rules. At all. They are all gone. You never have to read one of those bloody precedence order lists again. Now wasn’t that worth it?
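
The sort of trap being described shows up even in Python, where an infix precedence table still lurks (my illustration, not from the comment). In C, `x & 1 == 0` parses as `x & (1 == 0)`, which is the bug the quoted post complains about; Python has its own surprises:

```python
# Infix precedence surprises: each expression parses differently
# from how it reads at a glance.
print(-1 ** 2)       # -1: parsed as -(1 ** 2), not (-1) ** 2
print(1 << 2 + 3)    # 32: parsed as 1 << (2 + 3), not (1 << 2) + 3

# Fully parenthesized -- effectively what prefix notation forces --
# there is nothing left to memorize:
print(-(1 ** 2))     # -1
print((1 << 2) + 3)  # 7
```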

    And as a nice little bonus you get easily machine-parseable and machine-generatable code. No biggy.

  55. Pingback: Answering 25 tough interview questions, part 2 | The Reinvigorated Programmer
