Thanks to all for the many thoughtful and insightful comments on Whatever happened to programming?
Stupidly, I didn’t realise that comments would be held for moderation, so it’s only this morning that I went through and OK’d them all (even the ones that say I’m just whining :-)). There are too many to respond to individually, but since some consistent themes emerged, I wanted to follow up on some of those.
Popular comments, distilled
1. Poor you, your job must suck!
Sorry, I realise I didn’t really explain this properly. In truth, I am lucky enough to have one of the relatively few jobs out there that really is largely programming. I work for a small open-source software house, and what we mostly do is make tools. So, actually, no, my job doesn’t suck. (Er, except maybe right now, when the dice happen to have fallen in such a way that I am in the middle of my third consecutive documentation project.) I was aiming for a lament at the state of programming in general.
And of course since my employer actually creates and distributes libraries and web-service components, we have an obligation to make them simple and pleasant to use — to avoid leaving people with Library Fatigue. You might say that our goal is to provide people with libraries that Just Work, and that provide narrow enough interfaces that you don’t need a lot of glue.
2. But you wouldn’t want to have to write printf()!
No indeed. I certainly didn’t mean to suggest that there is no place for libraries! It’s great to have a set of existing functionality that we can call on whenever we need it. The problem comes when all you do is glue libraries together. To put it another way, libraries make excellent servants, but terrible masters. Somewhere along the line, we let the libraries take over the asylum. Not good.
Here’s a rule of thumb (which, like all such rules, is often broken and should not be taken too seriously): beware of anything that calls itself a Framework. Anything that, instead of providing stuff that you can call, takes over the wheel and tells you what code to provide for it to call. Not always, but often, that marks the line where this stops being fun.
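The distinction behind that rule of thumb can be sketched in a few lines of Python (all names here are invented for illustration): a library is code you call; a framework is code that calls you.

```python
# Library style: your code owns main() and calls down into helpers.
def slugify(title):                        # a plain function you call
    return title.lower().replace(" ", "-")

def my_app():
    return slugify("Whatever Happened To Programming")

# Framework style: the framework owns the control flow and calls
# code that you register with it.
class Framework:
    def __init__(self):
        self.handlers = {}

    def route(self, name):                 # you hand over callbacks...
        def register(fn):
            self.handlers[name] = fn
            return fn
        return register

    def run(self, name):                   # ...and the framework decides when they run
        return self.handlers[name]()

app = Framework()

@app.route("index")
def index():
    return "hello"

print(my_app())          # you called the library
print(app.run("index"))  # the framework called you
```

The pain described below starts when the framework's `run` loop does not expose the hook you need: in the library style you simply write different code, but in the framework style you can only register what the framework is prepared to call.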
In my experience of programming with frameworks — and this goes all the way back to writing Windows 2 GUI applications in raw C with the aid of the appalling Microsoft resource compiler — they do keep their promise of making things very quick and easy … so long as you do things in exactly the way the framework author intended. The moment you go off-piste, everything goes Pete Tong, and what should be simple is suddenly not merely hard but often impossible. The framework goes out of its way to prevent you from doing what you intended. Strange side-effects hamper your efforts. The hook that you need to get the functionality you want isn’t there. Your attempt to work around it causes something else, apparently unrelated, to start misbehaving in subtle and unpredictable ways. Congratulations! You are now suffering from Framework Fever! Doctor Taylor prescribes a period of complete rest before building up your strength by working through the exercises in K&R.
(Again: this is not always true of all frameworks. I hope from the bottom of my heart that when I start using Rails, it will prove to be among the exceptions. But my experience so far does not make me optimistic.)
You know the real problem with frameworks? They demo too well. Someone shows you their favourite framework and demonstrates how you can build 50% of your application in half an hour! Great! That other 50% can’t be hard, can it? But it turns out that what looked like 50% is actually 5%, and filling in the other 95% gets exponentially more difficult as you approach the 100% mark. Frameworks are great for building toys, and that fools us — again and again — into assuming they’re good for building products.
3. But libraries help us do things more efficiently!
If only it were that simple. In the Reddit discussion on this article, Silversmith commented:
You have problem X, consisting of sub-problems X1, X2, X3. There are readily available solutions for X1 and X3. If you don’t feel like being a bricklayer, code X. I choose to code X2, plug in solutions X1 and X3, and spend the rest of the day investigating and possibly solving problem Y.
I admit this sounds great. But here’s the fallacy: we all assume (I know I do) that “plug in solutions X1 and X3” is going to be trivial. But it never is — it’s a tedious exercise in impedance-matching, requiring lots of time spent grubbing around in poorly-written manuals that tell you everything the code already told you (because it was generated with JavaDoc or RDoc or whatever), and none of the high-level stuff that you actually need to be told. So maybe my initial complaint was not really so much “whatever happened to programming?” as “why do so many of our libraries suck so much?”
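To make the impedance-matching concrete, here is a toy Python sketch (both “libraries” are invented for the example): X1 and X3 each work fine on their own, but every mismatch between their data shapes becomes glue you have to write and maintain.

```python
# Hypothetical library X1: emits records as dicts with camelCase keys
# and prices as strings carrying a currency symbol.
def x1_report():
    return [{"itemName": "widget",   "unitPrice": "$3.50"},
            {"itemName": "sprocket", "unitPrice": "$1.25"}]

# Hypothetical library X3: expects (name, price) tuples with float prices.
def x3_total(rows):
    return sum(price for _name, price in rows)

# The "trivial" plug-in step: pure impedance-matching, no problem-solving.
def adapt(records):
    return [(r["itemName"], float(r["unitPrice"].lstrip("$")))
            for r in records]

print(x3_total(adapt(x1_report())))   # 4.75
```

None of the `adapt` code advances problem X; it exists only because X1 and X3 disagree about representation, which is exactly the tedium being described.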
The good thing is that (again, as others have pointed out), we are not just pawns in this game. We are, if not kings and queens, then at least knights and bishops. We are not, or don’t need to be, passive consumers of others’ libraries, but also makers of libraries for the benefit of others as well as ourselves. We can and should raise the bar. We need to make every effort to design our libraries in such a way that the programmer is in control and the library is his or her servant. I think that’s so important that I am now going to go and have it tattooed on the insides of my eyelids, so that it’s always before me when I’m sleeping.
And folks, let’s write useful documentation. I don’t need a 900-page document that is a recapitulation of the method signatures. I need a three-page statement of what the library is for, what shape it is, how the bits fit together, why it uses italics so much and where to start.
4. You should be using Ruby instead!
Seriously, I absolutely agree that different languages, with their different expressive power and especially their different culture, yield very different experiences. Especially, I have learned that anything that has “Enterprise” in its name is so incredibly boring that the people who use it had to shove the name of the Star Trek ship into its title just to keep themselves awake. (I am convinced that this is the case.) Better languages and better environments make for a better and more programmingy programming experience. I’m in favour of that.
The conclusion of the matter
What chills me to the core is this comment from Jesse:
I frequently create custom business and ecommerce applications, and I hardly ever get to do any copy/pasting. There’s always a custom report to create, a custom CSV to parse, or a custom XML system to integrate.
That, right there — that’s what I’m afraid of.
Éowyn: I fear neither death nor pain.
Aragorn: What do you fear, my lady?
Éowyn: A cage.
Lord deliver us from a career of creating custom reports, parsing custom CSV and integrating custom XML systems.
I’ve just found the discussion of the original Whatever happened … at Hacker News. Lots of interesting perspectives. And especially this, from jdietrich, which captures exactly what I’ve been trying to say:
Back in ye olden days, most programming tasks I performed felt quite natural and painless, just a quiet little chat between me and the compiler. Sometimes longwinded, sometimes repetitive, but I just sat and thought and typed and software happened. The work I do these days feels more like being a dogsbody at the Tower of Babel. I just don’t seem to feel fluent in anything much any more.
We talk about ‘flow’ quite a lot in software and I just have to wonder what’s happening to us all in that respect. Just like a conversation becomes stilted if the speakers keep having to refer to their phrasebooks and dictionaries, I wonder how much longer it will be possible to retain any sort of flowful state when writing software. Might the idea of mastery disappear forever under a constant torrent of new tools and technologies?
That’s it exactly.
I just posted a “gist of the discussion” comment (http://j.mp/c4HCvD) at your “Whatever happened to programming” post (http://j.mp/cJJkYK) only to realize you already have this follow-up post :)
Anyway, on reading this follow-up post (though I know it’s normal of us developers), you seem to have contradicted yourself here.
Let’s accept it: we as programmers hate writing documentation. And then we gripe about libraries and frameworks that don’t have adequate documentation. And I believe this is not going to change (for most libraries, at least).
In the end, the comment you quoted from Hacker News has captured the essence of the current state of software development, and, I guess, the soul of your original post.
I can only wonder where programming will stand in the next 5–10 years.
In short I could not disagree more with you on this point.
I enjoy writing compilers, picking at bits, implementing libraries. I also enjoy building things (normal) people care about. It seems what you’re lamenting is not the ability to _make_ but the fact that you don’t get to do so in the deepest, darkest, pits of programming. My response? DO IT! Get a job with VMware, Apple, or Microsoft working on dark kernel magic. Work for Microsoft on C#, F#. Contribute to Mono. Hack at the linux kernel, or write your own!
The things you mourn are not gone, they’re just not as prevalent because the industry has grown past needing to focus on them. If everyone spent their time reimplementing the wheel what would the state of the art be today? It certainly wouldn’t have the breadth of applications it does now. What about desktop computing? Can you imagine if, instead of needing to be competent in four languages (C, C++, Java, .NET of choice) you had to cover all those plus the 827 home grown variants to stay relevant?
All in all, the growth in libraries (and frameworks) seems to indicate not a death of programming but an evolution. Yes, choosing a library means you give up some amount of freedom, but if the alternative is first building that library and THEN building your application… well, I just don’t see the problem. If it were so easy to build these things in the first place, the frameworks you dislike so much wouldn’t have such a foothold.
It sounds as if your user Jesse finds room for creativity in building custom reports and ecommerce apps. Who are you to say that his joy is insufficient because it’s not “pure” enough? If a physicist were to complain that he never gets to rederive the facts of classical mechanics I would look at him like he was crazy — that was done originally so that we could make progress beyond them.
A clever fellow once found some research that said it took ten years to become an expert in many areas. I’m not sure why we expect programming, as a maturing field, to be any different. Yes, building an application that involves jQuery, CSS, Django, and a murder of XML schemas is hard and requires understanding lots of things. I would rather gain an understanding of those tools and use them to build my project than build them first and then get to work.
All that being so, during 2009 I found myself making one of those pesky custom XML systems and discovering, to my surprise, that XQuery still had the power to invoke that old dangerous excitement. You can do stuff in it without endlessly repeating yourself, and you can roll the software pastry deliciously thin.
Unfortunately, since that experience, the daily load of acting under orders to embody data structures in Oracle and layer up the pancake Java over the top has become unendurable.
@Richard: “Can you imagine if, instead of needing to be competent in four languages (C, C++, Java, .NET of choice) you had to cover all those plus the 827 home grown variants to stay relevant?”
I think that IS where we are, and is exactly what jdietrich was saying as well.
Personally, when I am doing hobby programming at home, I get to be creative and solve things, and sometimes it is fun and sometimes it sucks. Usually, when at work, my job consists of spending hours if not days trying to figure out why one library doesn’t do what it says it will do. Especially if that library has anything to do with web frontends. When you spend more time doing workarounds than actually using a library, that’s a problem.
From part 1: “For the program to stop being a private project and become a public product, it needs documentation — APIs, command-line manuals, tutorials. It needs unit tests. It needs a home on the web. It needs checking for portability. It needs changelogs and a release history. It needs tweaking, and quite possibly internal reorganisation to make it play nicer with other programs out there.”
I’ve been writing code for most of my life, but only off-and-on, so forgive me if this sounds ignorant, but, could some libraries be written in the popular languages (Java/C++/Python/etc.) which perform more of these tasks?
That is, could some phase 1 work be done to minimize the phase 2 work, leaving more time for the more pleasurable phase 1 work?
Warning: long comment ahead.
I sort-of agree with you here, but I’m from a generation of programmers who have not had to implement everything from scratch. I never played around with Basic on those really old computers. My first programming experience was at University in 2004, writing Java. As such, I can’t really relate to your arguments – the VIC-20 was discontinued before I was born. The Sun 3/80 was probably released not long after I was born. And so on for your other examples – you might as well be talking about how much better Mars was before this newfangled ‘Earth’ thing came along.
And yes, I do indeed spend a fair amount of time pasting software libraries together. I don’t mind it. It’s interesting, in its own way. I enjoy the extra speed it brings me. I enjoy that I can focus more on the things that are important to the business. And I really don’t enjoy duplicating effort. Implementing my own malloc()? What the hell is the fun of that? It’s been done. And it’s been done well. Surely there are more interesting things that no-one has done yet (or done well)?
Let me expand on this point a little more. I assume you’re a good coder (you sound like you *care*), and from the sounds of it you’ve only worked with good coders.
I currently have the joy of maintaining a legacy system that was written long before there were many Java libraries available. This means that we had to reinvent the wheel for pretty much everything (by we, I mean the company – I joined in 2008).
ORM solution/Database Access? Custom. String Utilities? Custom. Job Scheduling? Custom. Logging? Custom. And probably a bunch of other stuff that I can’t remember off the top of my head (MVC framework was, thankfully, not custom).
This may not sound too bad, but I can assure you that it really is. It’s hard to maintain because the original coder is long gone. It’s full of bugs.
And, far more importantly, it is *extremely* inefficient. Quite amazingly so in many places. This hasn’t been an issue for years, but it’s now coming up, and we’re basically screwed because the core of our system, tightly integrated with just about everything, is a major bottleneck.
Would this have been solved by using a library? Maybe, maybe not. But if we were using a popular library, at least we’d have access to a huge number of users who would be able to help us out, suggest things we could try. Instead, we have to try and fix it ourselves.
Part of the problem is that we haven’t invested much in our custom solutions. But why should we, when there are plenty of alternatives out there? Freely given away by people who are genuinely interested in writing that code.
So yeah. I’m afraid I can’t really see your problem with libraries. Maybe if you only hired good coders, you could get away with writing everything from scratch. But here’s the thing: that’s pretty hard. You’ve got to be pretty successful to be able to afford to hire only good coders. And probably not too large, either. So most companies are going to have some good programmers, some average programmers (like me), and maybe some bad programmers too.
The more code you write for yourself, the more people you will need to employ. If you want to duplicate these frameworks, you’ll need an awful lot of people, and you will need to keep them employed so they can keep your system up to date. Chances are good that you’ll end up with a bad programmer or two (or 10) in there at some point.
I don’t know about you, but the bad programming I’ve seen is pretty horrific. Huge amounts of duplicated code, naïve algorithms, and a complete lack of good programming habits.
These aren’t people I would want to keep around, but the more people you need to employ, the greater the chance of ending up with one of them, and the amount of damage they can do is insane.
So the solution? Employ fewer people. Take advantage of all these open-source libraries, and channel the resources you would have used for duplicating them into A) gluing them all together, and B) writing code that is business-specific, that no library could possibly provide.
The other advantage of libraries is that they tend to be used all over the place, so when you hire someone new, you can tell them ‘We use libraries x, y, and z, and frameworks a, b, and c’, and you don’t have to train them on your own custom solutions.
Which also works in reverse. All the stuff I’ve learned about the custom database stuff for my company is *completely* non-transferable. Next company I work for isn’t going to give a flying fuck that I’ve worked with and made improvements to a custom ORM solution. They’re going to expect me to know things like Hibernate (probably the most popular Java solution in this area).
Anyway, I believe that the amount of developer-hours you waste gluing stuff together is only a fraction of the amount of time you’d waste reinventing the wheel to the same or better quality than what’s already available. Maybe you don’t enjoy it all that much. That’s fine. I have no problem with it, and find it far, far better than the alternative (which has the potential for so many problems in the long term that it’s just not worth it). Point is, I suffer from framework fever from using our own, custom framework. That is, doing what you suggested has caused the problem you were trying to prevent. Whoops.
As a Presentation Layer guy I can tell you that I’m seeing a shift of the types of successful developers out there in the field. Those developers that can bounce around between different API’s and syntaxes are the ones that are in demand and those developers that know one technology or platform well aren’t.
To become a successful developer in the next decade, you must be a generalist. It’s a completely different way of thinking. You have to actively try NOT to learn too much of one platform, for fear that it’ll bias you against all the other languages you’ll have to work in. For example, coming from more structured languages, seeing jQuery’s chaining and use of anonymous functions would turn off most developers, and they’d shy away from using it. However, it’s the best tool for the job currently, and not using it because of its weird syntax would be a mistake. The same thing applies to MXML, WPF, LINQ, C#’s var, etc., and all sorts of new improvements to languages that people don’t use because they’re “different” and give you “less control”.
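The chaining style referred to here works by having every method return the receiver, so calls read left to right. A toy Python analogue of the pattern (this is not real jQuery, just an illustration of the fluent-interface idea):

```python
# A minimal jQuery-style fluent interface: each method returns self,
# so calls chain left to right over a wrapped collection.
class Query:
    def __init__(self, items):
        self.items = list(items)

    def filter(self, pred):
        self.items = [x for x in self.items if pred(x)]
        return self

    def map(self, fn):
        self.items = [fn(x) for x in self.items]
        return self

    def each(self, fn):
        for x in self.items:
            fn(x)
        return self

result = []
Query(range(10)).filter(lambda n: n % 2 == 0) \
                .map(lambda n: n * n) \
                .each(result.append)
print(result)   # [0, 4, 16, 36, 64]
```

The syntax looks alien to someone used to one-statement-per-line imperative code, but the mechanism underneath is nothing more exotic than `return self`.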
I’m mostly a Perl developer and recently I’ve been working with Groovy (a dynamic language for the JVM) and I think I understand what you mean.
In the modern Perl world (not that crap written back in 2000) documentation is usually helpful, with examples. I always took good docs for granted. I even complained about them, to be honest – they could be even better!
Then I got into Groovy and its Java-land heritage. Oh boy… The documentation is verbose and most of the time useless. You need to search Google for everything and scrape information in blogs, bugtracking, mailing lists, while in Perl you get ALL the docs from CPAN.
Groovy is a great language and has most of the capabilities of Perl (and also additional features). Which makes me believe this is a community issue.
People in “enterprise land” don’t care about fun in programming, they usually don’t care that much for quality either, as long as it works. Spending time to actually WRITE good docs is seen as a waste of time, after all, hundreds of pages can be automatically generated, right? Right?
Unfortunately this is the environment where most of Java development occurs. People who want to have fun in programming are going elsewhere. Ruby, Python, (modern) Perl and others – that’s the way to have fun in programming these days. But even then, I could bet that these “Enterprise Rails” software shops I’ve seen around are just as bad.
On a final note, frameworks *can* be flexible and built in a way that doesn’t trap you in a cage. In case you’re interested, take a look at the Catalyst framework for Perl. You can’t do a high-quality demo with it in half an hour. But I could also bet that a good Catalyst team would beat a similarly good Rails team on a reasonably-sized project in both quality and time to deliver, although it would lose badly if you’re just evaluating the tech-demo scenario. I guess that’s one of the reasons Rails has the spotlight and Catalyst is relatively low-profile (although several large players are using it as I write this).
Bottom line is: it’s a culture thing. Run from the enterprise and you’ll find fun again.
I think the key issue is the mental state of flow. When programming was a “conversation with the compiler” the work was algorithms, structure, filter logic, optimization, tool building. Work involving intelligence, logic, inspiration.
With OOP, vast and incompatible libraries, and syntax, parameter, and interface impedance taking over, the work becomes an endless bug search in the dungeons and dragons of opacity and inadequate documentation. Yes, there is lots of code to reuse, but the quality is so abysmally inconsistent that the work becomes drudgery instead of flow.
We need better tools. But those who lament the lack of deep computer science will not find it in business programming. They need to work in embedded, or go to work for intel research figuring out how to design algorithms that make efficient use of parallel hardware, or something that can search a pc’s hard drive in 10 ms instead of 2 minutes.
Solutions to interesting and hard low-level problems are valuable, but guess what: they are not easy, and they don’t involve copying the textbook. But think for a second about how much fun it must have been to design Skype’s algorithms.
The problem with the vast libraries of enterprise software is that no one can fix them, exactly because they are huge: an industrial-sized rather than a human-sized problem. It’s no longer programming, it’s system integration.
Part of this boils down to the fact that production and creativity compete for resources. Both are needed for programming. Any unilateral solution will result in a business failure, in addition to sparking a philosophical holy war.
It costs a lot to have a person put in lots of time on a project. It costs less to spread that time over multiple projects. A CS program may teach algorithms, data structures, numerical methods, programming languages, and operating systems. Yet it does not usually teach, in my experience, the reason why a company should hire you.
Either you get internships in the real world or you realize when you graduate that engineers look up math results in databases; they don’t write equations. That means the choice of grad school, database coding, maintaining existing code bases (a big chunk), or being the cable guy.
Then came the web, and it offered growth that was worth the investment. People paid for creativity because profits were through the roof. Then the bubble burst and things came back to reality. Existing toolkits began to dominate a tight work-flow.
It works the same with books. It is a lot more profitable to recombine and tweak existing material to meet new needs. Or you can out-source the product development to mavens doing it for the love of the game and the hope of future royalties. In other words, side jobs.
In publishing, however, authors writing original copy are few, save for a number of established luminaries. It probably costs at least $50k to develop a textbook.
It’s been a long time since I got an ancient ATI graphics card to fake 64 colors on a CGA monitor by interlacing its secret 16-color mode and telling DOS I was really in regular CGA mode. It’s been a long time since I ever needed to conserve space by jumping to an odd memory address or hand-optimize assembly code.
I can’t imagine that anyone would give a rip about that on today’s hardware. Yet they should. A bad algorithm will still hose today’s computers. Resource limitations used to force programmers to be elegant, like the Jedi knights instead of the Storm Troopers. Well-designed code does well under pressure.
One of the reasons I went to Linux and FreeBSD for desktop and server use, respectively, is that I bought a graphics suite that was clearly the result of coders hacking modules together. Open source drew me because the elegance and robustness sells the product, not cost. I shook the dust of my sandals at Windoze and never looked back.
I think I know exactly what you’re talking about, though your solipsist presentation perhaps obscures your message.
The guy who said he wanted to plug in what he could and then proceed to the next issue is an epitome of the problem.
When I worked at a caching company in ’98, they configured Squid with excellent hardware and found, by tweaking the software very carefully, that they could get as high as 9 HTTP transactions per second on the standard benchmark (Polygraph). Since the people who wrote Squid were obviously very smart, it was pretty clear that this was as fast as a cache could run. They could confidently tell the VP that. Squid is also a very large and complicated program, so it’s not like we could start a project to replace it with something better.
This kind of thinking [ geniuses wrote our libraries, manage expectations to the best that can be accomplished by configuring the library ] is incredibly dominant in SOME companies. As time goes on it seems to be more like MANY companies. But it is always a choice of the programmer culture. It’s who you choose to associate with.
They’ll tell you proudly that their web site fields 20 MILLION transactions per day, and it turns out they use 40 Apache front ends which talk to another 40 Tomcat servers and 4 database units, each of which consists of a master, a slave, and I forget what. And you’re thinking about the other place you consult for, where they do exactly the same workload on one box with a hot spare. All while you listen to the CTO explain how effective their scalability strategy is, even though, when you get right down to it, all their problems are because it is damn hard to provision or move hundreds of servers so that they can have COLO freedom. Funny, the other company could move (and has moved) in part of an afternoon.
What is IMMENSELY painful is the way the standards of achievement are lowered. These guys are not saying, “this sucks, we must do better”; they are saying, “meh, I guess that’s as good as anyone can get.” Suddenly their work IS as easy as plugging in X1 and X3, scribbling up an X2 and moving on to Y. It’s just that their work does NOT involve being competitive with anyone who knows what they’re doing… if there’s anyone like that competing.
What that guy doesn’t realize is that he (probably) never really solved problem X. He kind of got it to work in some circumstances, if you drop the obvious requirement that it complete in a reasonable amount of time, etc.
The actual name for the type of programming you are describing is “Prototyping”. Since we in fact live in a world where “Prototype” is as good as software usually gets, maybe you need to get more comfortable?
Working on a small embedded linux system recently, I observed that DL-Malloc in fact completely sucks for limited-RAM multi-threaded systems. If you turn on full reporting you’ll see that malloc is grabbing more core for Arena 3 even though Arena 2 has 12 MB unused. If you tend to malloc objects in one thread and free them in a different one, you’re going to get a ton of Arena-switches if there is any contention. So really if you are doing small-scale systems (or indeed many things which mimic small-scale systems) and you’re using Doug Lea’s malloc (the standard one) you are screwing yourself. Then when you start digging through the source it’s solid self-congratulation on how cool and efficient it is, even while obviously being tuned for some pretty specific (though not atypical) use cases.
So write your own.
Thus the answer to your library question is that libraries are often kind of a myth. Problems are not general, problems are specific, and every general solution is an approximation. Very complex general solutions (libcurl?) are essentially fit for nothing beyond prototyping.
If a library lets you reach 90% of the potential of the machine, and you are using 20 libraries, then you are only reaching 12% of the potential.
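That figure is just compounding: twenty layers at 90% efficiency each multiply out to roughly 12% of the machine’s potential.

```python
# Twenty stacked libraries, each delivering 90% of the attainable
# performance, compound multiplicatively:
overall = 0.9 ** 20
print(f"{overall:.1%}")   # 12.2%
```

The exact percentages are illustrative rather than measured, of course; the point is that per-layer losses multiply rather than add.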
1.) The era of single language programming is over. Successful programming requires a multilingual approach. Juggling multiple languages and platforms can require quite a lot of creativity.
2.) Agree with everything else.
“Here’s a rule of thumb: beware of anything that calls itself a Framework. Anything that, instead of providing stuff that you can call, takes over the wheel and tells you what code to provide for it to call.”
Anyway, the question is how to make the libraries we use usable. It’s not impossible. POSIX, for example, seems to work well. The same goes for the C runtime.
In my experience (been there, done that), creating a library that’s fun to use is 3-4x more costly than creating a dull, messy framework.
Now, would you be willing to invest $400,000 instead of $100,000 to make your library convenient to use?
A clever fellow once found some research that said it took ten years to become an expert in many areas… I would rather gain an understanding of those tools and use them to build my project than build them first and then get to work.
@Richard: 50% of the components and libs you worked hard to learn will have gone away in 10 years. The fundamental programming skills have a much longer half-life. I suspect that this is one of the reasons why “older programmers” shy away more from indulging in new libs and frameworks. Learning something in the lib department often seems less satisfying the more you suspect that it is “dead knowledge” already.
I completely agree with you, and that is why I became a quant developer. Five years ago I realized that programming had become dead boring. It is a redux of things that I have done earlier. Yes, yes, there are changes.
But as a quant developer I get to write Excel programs and code that is purely algorithmic. When I write auto-trading systems for the market, it is one joy after another. And I really do mean one joy after another. I get to run algos, and because the market changes, my algos have to change.
So while I can understand as a general programmer you will be bored silly, as a specialized programmer I am having the time of my life….
And whenever I listen to my colleagues talking about the latest Visual Studio feature, I just roll my eyes.
For the last n years I’ve been writing applications which use a web browser as a terminal. Many would describe it as web programming, but I don’t like describing it that way because the applications follow the same principles as the old mainframe applications I wrote in the ’70s.

So, semantics aside, you’d be excused for expecting me to program in PHP, or Perl, or even Ruby or Rails. Nope. Many times have I surveyed the tools available for easy web programming, and there are just too many: learning them would take longer than simply getting on with the job using the tools I already know. Rather than learn Perl plus dozens of related modules, I use C plus my own standard library; SQL (PostgreSQL flavour, embedded in C); and HTML and CSS (I’m considering jQuery but haven’t yet found the need).

I’m not able to grab a blog and a calendar, should that be needed, because I haven’t yet needed them and so they don’t just plug in. And this is good. I enjoy my work, which is highly creative, and I can make much stronger statements about security than can people who use Rails. And my apps run faster, too. And honestly, when you factor in up-front design, end-user documentation and testing, all of which take the exact same time regardless of the implementation language, I don’t think it takes me longer than it would using PHP (assuming I was fluent in it).

So my advice is to ignore the maxim about not re-inventing the wheel. Use components that make sense for you to use, and yes, write your own web server when that makes sense. (Web servers, by the way, are not so complicated. You can write one using nothing more than shell scripts.)
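For what it’s worth, the claim that web servers are not so complicated is easy to illustrate. Here is a deliberately minimal HTTP server sketch in Python (toy code: one request, one canned reply, and none of the parsing, security, or concurrency care a real server needs):

```python
import socket

# A deliberately minimal HTTP server: accept one connection, read the
# request, send a canned HTML reply. Enough to show that HTTP is just
# text over a TCP socket.
def serve_once(port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(4096).decode("latin-1")
            body = "<h1>hello</h1>"
            conn.sendall(("HTTP/1.0 200 OK\r\n"
                          "Content-Type: text/html\r\n"
                          f"Content-Length: {len(body)}\r\n"
                          "\r\n" + body).encode("latin-1"))
        return request.split("\r\n", 1)[0]   # the request line
```

A real server adds request parsing, error handling, timeouts, and concurrency on top of this loop, but the core protocol really is this small.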
I completely agree with your Point 2 – frameworks try to control the programmer, a well designed library sets them free. See also http://lwn.net/Articles/336262/ wherein a ‘midlayer’ is another name for a Framework.
The web is a framework. Some of your gripes are rooted in that, I think… XML and CSS and JS are just so many layers of cruft between your UI and the actual code you want to write.
I really enjoyed my last job which involved a lot of graphical work with Qt. It’s a really big library but you are in control, and Qt is so complete that you rarely need to look elsewhere. The documentation rocks too. I was able to become fluent and know instinctively how to code just about anything. But now I’m back doing EJB stuff which I hadn’t done any of since 2001… what a nightmare. Not only does EJB suck but there are way too many “best practices” and patterns I’m expected to know and follow. Layer upon layer of extra indirection and interface abstraction, most of which is completely pointless donkey work. The application I’m working on is absolutely the most boring ever, too: a contract project which is simply not useful to normal people at all.
I had a job once at which we wrote useful, commercial software in Scheme, nearly from scratch. That was quite fun; it was just the people there who made it much less fun than it should have been.
Another place you get to start from scratch is in some kinds of embedded projects.
We need to re-invent the web as a client-server programming platform rather than a steaming pile of inefficient markup and scripting languages. Forget it all and start over with just one language, designed for message-based distributed processing, with a small set of bindings for creating GUIs.
The reason why we do a lot of plumbing of pre-assembled pieces is that we are building very complex systems, yet we are given much less time than ever before. The result is often bloatware that uses hundreds or thousands of libraries, but we can finish in a month instead of two years. What we won’t achieve on this route is innovation along the way: in the old days, by re-addressing a known problem, perhaps from a slightly different perspective, people often invented new algorithms. A lot of theoretical work in the 1960s, for example, was inspired by a system’s need for a customized version of something that already existed. I remember reading a paper about a syntax-aware editor that could tell you efficiently whether the current line of source code could possibly be a part/substring of any well-formed PASCAL program. In today’s world, people just store the parse tree for the whole source file in RAM (read: syntax checking in Eclipse) or take an ugly shortcut (read: syntax highlighting with regexes in Emacs); but back then, they treated this as a formal-language problem, solved it, and implemented it.
The reason why we know less about more (XML, CSS, Java, C++, DSSSL, DHCP, HTTP, ….) rather than more about less (SICP) is that the world of computing has become more specialized on the one hand and on the other hand there are many issues where different assumptions (and sometimes ideologies) license different technologies.
The funny thing is that despite all open source and library abundance, it’s still necessary to re-invent the wheel very often…
I wholeheartedly agree with your views re: frameworks (even wrote a paper on frameworks versus toolkits).
PS: No, you should not be using Ruby instead, you should be using Scala ;-)
I have to agree with both you, as well as another comment made by Richard.
I too got into programming for the thrill of “creating something from nothing” but chose not to be the type of higher-level programmer that glues widgets together. I started writing code for micro-controllers in an electronics / instrumentation class and still do that to a large extent with embedded and mobile devices.
I have the feeling that you would really like that type of work too.
I agree with Richard in the following sense: If, as a software developer, you feel you’re no longer writing software, but just gluing code together, then you’re working at the wrong company on the wrong project. Either that, or maybe you should just get into management and start telling people the right way to make software.
Realistically, there are lots of companies (especially in the embedded universe) that need people who write efficient code. Many of those places embrace open-source. It might come down to a compromise between a) the great benefits & vacation you get at your current multi-national super-corporation gluing code, or b) having fewer benefits and less vacation at a job doing something more interesting. If you’re a family man, then it’s probably best to stick with option a) and hack at more interesting projects in your “spare time”.
If you’re just bored, then I would suggest joining an open-source project such as Linwizard or OsMoCon. You’ll need to invest < $100 to get the hardware, but you have the potential to make a serious impact. If you have some teenage kids then they might even be interested in helping, and then you also do something fun that brings you closer to your kids ;-)
It depends on the library and it depends on the framework and then, again, it depends on your intentions, your software design, your requirements. The great skill here is to have knowledge of these libraries or frameworks and to match them appropriately with the problem you are trying to solve.
People get into trouble when they try to shoe-horn Application A into Framework B, where Framework A would be a more suitable environment.
I agree we have arrived in a scripting age (Python, Ruby). Frameworks dictate the exploration of knowledge in huge class libraries. I usually create a prototypical application representing a family of applications. Then I create a template for the Modgen generator (http://members.home.nl/wijnenjl/Modgen_About_EN_01.html) in order to rebuild the solution. It’s programming the system to re-generate its generalized set of programs. This way of working is awesome and powerful. James Wayne.
As a Java analyst/programmer, I find the phase 2s can be fun as well! Use tools like Jira, Confluence, Hudson, and Selenium, and take pride in knowing that what is documented will remain your legacy as the ‘application outside the code’. It certainly makes systems much more robust and maintainable for the next generation of developers.
However, I can see that there would be a world of difference between producing phase 2 items for internal applications vs a more public commercial product, but then isn’t that what professional technical writers are for? :)
But I agree also. Something that is not taught at university is that almost any Java application of substance benefits somehow from the efforts of the open-source communities: IBM, Apache, Spring, and JBoss, to name a few, and Sun too of course. We ‘stand on the shoulders of giants’. Most of the day-to-day challenge for software engineers is being well-read and experienced enough to advise on the capabilities and limitations of third-party libraries, and how they should or should not be used (without necessarily remembering the exact APIs!) to solve a particular real-world business problem. Of course the same applies to recognised paradigms (or ‘buzzwords’), and the proper application of development tools!
The web-development domain changes so fast that I enjoy that challenge, phase 1 or 2.
Oh, and incidentally (before there’s too much Java-bashing!), I did find the grounding in C vital to understanding memory allocation and writing clean, efficient algorithms.
Java is definitely over-verbose, and most apps are memory hogs. But annotations and other Java 5 technology reduce some of the development pain. I would actually prefer writing in any of the above (or even Perl or PHP), but they are not often a commercial option. Java is far more ubiquitous, powerful, and trusted, and everything ‘just works’ (bar notable exceptions of some non-reference application servers and JVMs!).
I definitely dislike maintaining code where someone falls into the trap of over-engineering a simple application with too much Java ideology, choosing an abundance of patterns and libraries over the application of some common sense!
@Richard: Addendum: “A clever fellow once found some research that said it took ten years to become an expert in many areas. I’m not sure why we expect programming, as a maturing field, to be any different. Yes, building an application that involves jQuery, CSS, Django, and a murder of XML schemas is hard and requires understanding lots of things. I would rather gain understanding of those tools and use them to build my project than build them first and then get to work.”
In theory that is right, if only you actually had 10 years to become expert at something. In those 10 years your skills may become largely obsolete, and the “current hype of the coding world” will be all about some other badly documented libraries. It’s not that I don’t agree with you, but I also agree with Mike.
First I am grateful that “[your] goal is to provide people with libraries that Just Work”. I believe small and useful libraries (appliances) are the solution: They are easier to learn, and require less documentation because they do only one thing.
Unfortunately software is going in the opposite direction. About ?5? years ago, Axis (http://ws.apache.org/axis/java/index.html) was a hodge-podge of libraries. This was good, because any one of those libraries could be extracted for its useful features. It was more difficult than it should have been, but still doable by a normal human. Unfortunately, Axis has been retired in favor of Axis2 (http://ws.apache.org/axis2/): a super-integrated web framework. Every part appears to be dependent on every other, and those parts depend upon ten-plus libraries of their own. Nothing works unless you install the whole thing as a service. Maybe my requirements were odd (I needed the functionality without having to install the server), but I found the framework intractable.

Do the gods responsible for the construction of Axis2 know they have rendered their software unusable to a great number of programmers? Only those making a career in web services can now use their product. The vast majority of developers, who need to build just one web service, must now go elsewhere.
Dammit, ipsi, if you’re going to go posting such cogent, well-argued critiques of my article as this, how the heck am I going to avoid having to back down and admit you’re right? I should never have allowed your dumb comment past moderation!
Seriously, you paint a depressing and all too believable picture of one alternative — the most likely alternative — to the library-infested modern programming paradigm that I was bleating about in the original article. Let me be clear that I do not envy you the job of maintaining Some Other Guy’s informally-specified, bug-ridden implementation of two thirds of an ORM; writing such a thing may be fun, but maintaining it is not — and maintaining someone else’s must be ghastly.
So as lame as it sounds, I may need to make yet a third attempt to state my case in a future article. Thanks to you and other commenters like you, I am starting to have a better understanding of what my case actually is.
I think the problem here is not so much that we use libraries and reuse code so often. Actually I love using GOOD libraries but I also know the things that cause grief here and suffer from it myself.
As I understand it the real problem actually is two more or less separate problems that can cause great pain to any self-respecting software developer:
1. REALLY BAD libraries, and Java especially has a lot of them. A BAD library is one that is over-engineered without limits, because the programmer who wrote it thought only about the people who might want to EXTEND it and never about the people who just want to USE the damn thing (at least you get this impression with some libraries these days). Another problem with BAD libraries is that the author often uses every pattern the “Gang of Four” came up with, just because he thinks that makes it a well-engineered piece of code. This forces you to create a lot more objects and learn a lot of interfaces, when all you can think is “What would be wrong with doing the same thing with three method calls?”
One concrete example I can recall: using an HTTP library, I had to implement an interface, return false in one of its methods, and pass the object somewhere, just to tell the library that it should not transparently follow HTTP responses with a 3xx status code. Of course it is not that much work, but consider how long it takes to find out that you have to do such a stupid thing, when a simple setter would have been a much better solution.
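The design contrast being complained about can be sketched in a few lines (all class and method names here are invented, not the actual library’s API):

```python
# Style 1: the over-engineered route -- implement an interface whose
# only job is to return False, then wire the object in.
class RedirectPolicy:
    def should_follow(self, status_code):
        raise NotImplementedError

class NeverFollow(RedirectPolicy):
    def should_follow(self, status_code):
        return False  # boilerplate class just to say "no"

class FancyHttpClient:
    def __init__(self):
        self.policy = None  # default: follow redirects
    def set_redirect_policy(self, policy):
        self.policy = policy
    def follows_redirects(self):
        return self.policy is None or self.policy.should_follow(302)

# Style 2: what the commenter wanted -- one obvious setting.
class PlainHttpClient:
    def __init__(self):
        self.follow_redirects = True

fancy = FancyHttpClient()
fancy.set_redirect_policy(NeverFollow())  # three names to learn
plain = PlainHttpClient()
plain.follow_redirects = False            # one obvious flag
```

Both clients end up in the same state; the difference is how much of the library’s vocabulary you must discover first.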
2. Inversion of Control! This pattern is used by most frameworks these days. Of course it has its advantages, like making your code very easily testable. But it also clouds the control flow of your whole program, and most implementations don’t do anything to make the effects less severe. Most even make it worse, because they introduce “configuration”, so that your loosely-coupled code gets integrated by some XML file or, even worse, by annotations scattered all over your code base. It’s far from trivial to build up a mental model that lets you really understand what actually happens in such a system, where the control flow of the whole program is dictated by (a) the code written by the framework authors, which in theory you shouldn’t need to know but which in practice deeply leaks into your program (this effect is what most people call magic), and (b) some sort of configuration.
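The inversion being described reduces to a toy example (names invented): with a library your code owns the flow and calls down; with a framework you register handlers and the framework decides when, in what order, and whether they run.

```python
# Library style: your code is in charge and calls down into helpers.
def parse(raw):
    return raw.strip().split(",")

def library_style(raw):
    fields = parse(raw)              # you call the library
    return [f.upper() for f in fields]

# Framework style: you hand over callbacks; the framework owns the
# event loop, so the actual control flow lives in *its* code, not yours.
class MiniFramework:
    def __init__(self):
        self.handlers = []
    def register(self, handler):
        self.handlers.append(handler)
    def run(self, events):
        out = []
        for ev in events:            # hidden control flow ("magic")
            for h in self.handlers:
                out.append(h(ev))
        return out

fw = MiniFramework()
fw.register(lambda ev: ev.upper())   # your code, invoked by the framework
```

In a real framework the `run` loop is thousands of lines you didn’t write, which is exactly why the mental model is hard to build.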
Not only do I completely agree with you, I’ve considered this for many years, and I think I know a solution. (Even written some prototypes) But even if I could get a small team to implement it properly, it would take a commitment from library vendors that, frankly, I don’t expect.
I think the solution lies in an extension of the “literate programming” ideas of Knuth, coupled with full code transparency.
For a start, libraries need to be available as source code. This is already true for many, but there are a lot of commercial libraries that come only as compiled binaries, because they want to protect their IP. (As if we can’t read machine code)
But raw source isn’t enough. Even loaded into a literate programming environment with inline documentation isn’t enough. We need to take the literate programming concept to the next stage, and introduce data-driven ‘metaprogramming’ concepts into the editor.
For example, how many times have you encountered this problem: a new field gets added to a particular database record, so you have to go through the code and find everywhere that record is created, cloned, mangled, and translated, to add a new line to handle that field. In large systems, we usually institute a ‘metadata’ system which (a) makes the code less readable, (b) complicates the handling of otherwise simple operations, (c) prevents the compiler from optimizing the code, and (d) has to be broken anyway for that one special field.
Now imagine if the metadata is not used by the running program, but by an intelligent code editor. You still maintain metadata, but instead of the binary code working through this table at runtime, the ‘metaprogramming’ layer lets you write micro-programs that generate the ‘output’ code based on templates and the metadata.
This sounds like a code generator, and in a sense it is, but it STAYS LIVE.
So, if you need to add a new field, you add it in the metadata, and it auto-magically boilerplates out to all the routines which deal with it. New routines take advantage of existing work. Refactoring also becomes rather trivial.
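A toy version of the idea (all names invented): the metadata table drives a template, and the “metaprogramming layer” regenerates the record-handling code whenever a field is added, instead of the running program walking the table at runtime.

```python
# Metadata table: the single place a field is declared.
FIELDS = [
    ("name",  "str"),
    ("email", "str"),
    ("age",   "int"),   # add a field here and regenerate
]

def generate_record_class(class_name, fields):
    """Micro-program: expand the metadata into plain, compilable code."""
    lines = [f"class {class_name}:"]
    args = ", ".join(n for n, _ in fields)
    lines.append(f"    def __init__(self, {args}):")
    for n, _ in fields:
        lines.append(f"        self.{n} = {n}")
    lines.append("    def clone(self):")
    lines.append(f"        return {class_name}({', '.join('self.' + n for n, _ in fields)})")
    return "\n".join(lines)

source = generate_record_class("Customer", FIELDS)
namespace = {}
exec(source, namespace)   # stand-in for the editor writing out a source file
Customer = namespace["Customer"]
```

Here the `exec` stands in for the editor emitting a real source file; the point is that `clone` (and every routine like it) is regenerated from the table rather than hand-maintained in a dozen places.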
If libraries shipped in this form, then adding that one extra field you need for your problem, or tweaking the templates to deal with Unicode instead of ASCII, would be enormously easier. Instead of writing glue, we modify the library so it doesn’t need any.
It’s basically ‘showing our work’, in the sense that all those structures and relationships that we, as programmers, keep in our heads as we write large wodges of code are made explicit, and therefore modifiable by others.
And there are many other advantages, too many to go into here. I’ll just mention one: multiple output languages.
I’m sure there are code generators out there right now being used to create libraries, but of course we (almost) never get those tools along with the library, just the end result. Even when we do (such as YACC or LEX files) any integration with the rest of the IDE tends to be broken, or the final code is heavily edited, making those resources useless for future work.
I’m aware that creating a multi-level system of code generators that feed code generators is both tricky to get right and can lead to some truly spectacularly bad coding and hard to find bugs, but that’s exactly the point where the art and science of being a programmer comes back.
I think you’d like it.
With 45 years of (not-always-constant) programming background (starting with AUTOCODER assembly language on IBM 1401s), and after having learned or invented dozens of languages, I lost a job last year ’cause it required one of the languages I didn’t know. So, in the last year I’ve had an enjoyable time designing programs in Assembly, FORTH and LISP/Scheme. If I need C, I just write the app in LISP and use macros to generate the code. OK, libraries can save time, and I use them when it’s appropriate, but for the last year I’ve really been enjoying programming again. For me it’s all in the design. I’d rather design something elegantly through UML or an FSM language and generate the code. Optimizing an elegant design is a lot more fun than Chinese Menu programming.
It is like I always say:
People need computer software that actually works.
Good programming involves avoiding as much work as possible, so that one can focus on the right problem and nail it. It is a delicate balance between avoiding and embracing work, with a line that is constantly in flux. When judging legacy code, keep in mind the environment of yesterday rather than today.
Languages and libraries (even hardware) seem to be subject to a kind of law of intellectual entropy. Exponentiating combinations lead to a reduction in the ability to do work.
A soldier knows his rifle. A general knows his units. Many of them. All with great flaws.
The modern programmer must command imperfect armies of libraries most of the time, and be a soldier when there is no one to command. If only we could delegate that “general” stuff to AI and be soldiers all the time. What does that remind me of?
@xk0der … don’t paint with so broad a brush. I happen to like both coding and writing documentation. But the latter only when it applies to my own code (or systems or networks, as I also enjoy doing system building and network building).
What I hate most about programming is when I have to, or am expected to, use the wrong tool for the job. That covers cases like using libraries that don’t do what I need, and I end up spending too much time (and thus destroying the time savings I should have gotten by reusing code) trying to make things work for something other than what they were designed for.
Basically, what I hate is patching, remixing, and mashing up other people’s code and tools to get the job done, when it produces an unclean product. Then I have something I hate to document and refuse to support. It’s probably buggy as hell, but I have no idea where those bugs might be, and would have no clue where to start when some bug finally shows up. It really isn’t my code. I so much want to wash my hands.
I’m a web developer.
I think there’s something of a difference between traditional software development (programming stand-alone applications, embedded systems, microcontrollers, etc) and today’s web development.
Web development is product design – what is important is the end-user experience. And given the nature of the web development environment – i.e. the internet – it makes sense that a web application is a collection of libraries, APIs, etc, glued together.
If you’re concerned about convention, then, as you’ve noted, there are some sound frameworks such as Ruby on Rails, and Symfony that give clear direction and boundaries on convention and consistency of code.
As far as the user experience goes, it makes sense to utilize existing libraries for presentation so that we can produce an experience that is intuitive and follows established conventions of user interaction and experience.
What surprised me the other day was that I wrote a small script in PHP to parse a large XML file, then ported it across to a framework (using built-in libraries and methods that largely just abstract the PHP-level functions I was using originally), and discovered that the new application ran much faster than my original. There were more effective lines of code in total, yet the framework version required fewer lines of hand-written code than my original script.
Now I’m not a professional programmer, I’m a web developer, so perhaps my approach in my raw PHP was flawed, and far-from-performant, but this is the beauty of the modern web framework.
If beauty of code and making the most of your chosen programming language is my goal, then I fail miserably, but if quickly producing a solid application that is easy to use, and has extensive functionality is my goal, then I’ll keep using frameworks.
Using good, modern, web application frameworks (which are few) doesn’t require a lot of patching, remixing and mashing up of other people’s code – if it works and it conforms to the API, then there’s usually no need to touch it.
Although, having said that, while I’m happy with my approach to building web applications, I do still have this quiet nagging feeling that I don’t know exactly how all my code works from top to bottom.
Maybe I should program a microcontroller from time-to-time :)
You’re in the wrong job. You obviously don’t enjoy building soulless business data systems.
Move into embedded programming. It’s still very much the same kind of development work as in the old 8-bit days you remember so fondly. The fun isn’t gone out of programming, it just moved sideways!
(I used to write games in the C64/Amiga era, and now do embedded work, often involving high-end graphics or audio, and it’s kept me happy since the games business became ‘officially no fun any more’ when it all went bigtime and the money and suits got involved.)
now, u have no rss, so I will never read anything from your blog!!! repent !!!
Tom, there are RSS links for both articles and comments at the bottom of the sidebar.
I got my bachelor’s degree in General Engineering and only got into the system development gig a decade later. Now, after another decade of designing and building systems I think I can shed a little light on what’s happening based on my non-programmer’s background.
Programming is in the midst of a major evolutionary step that many other disciplines have passed through, and it will require new ways of thinking about how we build systems and how we train those who build them. We train most new programmers as “computer scientists” — that is, people who are trained to design new and better algorithms, compilers, etc. That is good; we need people to do that. But it also reflects a field that is still in its infancy. 30 years ago programming a computer to do simple things was a much more complex task than it is today.
With the advances in programming techniques, many tasks are relatively mundane and do not require the skills of a computer scientist. They require the skills of an engineer: one who can solve problems and assemble solutions from available materials. Putting a computer scientist to work on such journeyman tasks is like asking a materials scientist to assemble a house. The materials scientist would probably rather employ his training to invent new or better materials than do the work of a carpenter. This is not to denigrate the carpenter, just to point out that a carpenter’s skills are different from those of the materials scientist. By the same token, the carpenter would likely rather craft a well-built structure than spend his days experimenting with alloys.
The bottom line is that computer science and programming are not necessarily the same task, and expecting them to be will frustrate those who would prefer to work on one and not the other. It is time we recognize this so we can train and employ people accordingly.
Louie, thanks for this. I think it’s the best, clearest and most accurate statement of the problem I’ve yet seen.
Good article. I’ve been programming in XML recently. XML for God’s sake. *sob*
Point 2, and the tangent re: demo code, is perhaps one of my greatest frustrations in coding. For a classic example of a piece of demo code sold as a usable system, see: “Windows” (any version, although the current ones are starting to progress).
Flip side: I’ve had a really great experience with the Mac OS X frameworks, and reasonably good success with the GNOME toolkits. (I’ve never really adopted KDE, mostly for historical reasons that no longer matter. That could change, but hasn’t yet for me.)
I guess two things I’d recommend to authors out there if they don’t want frustrated developers:
1. Don’t release demo code as production. Ensure it’s tested, and ensure that variance in the inputs is allowed for outside of the test cases (or at least handled).
2. Be consistent. Inconsistent code can cause all kinds of debugging problems down the line.
Having been a key developer on two major enterprise Rails-based applications, I can tell you that its very problem is that it *is* a framework that you must conform to.
For our third major enterprise project, we have switched to Ramaze because we have grown tired of fighting Rails every time we want to do anything the least bit creative.
Ramaze’s philosophy is that it conforms to your needs, and we have had a lot of success plugging it in on top of the enterprise-quality libraries we have built for our application.
Of course, Rails does have its good points, like the excitement it has created for Ruby, and the ability to very quickly write many modern web apps, but it didn’t fit our needs, and it doesn’t sound like it’d fit your needs either.
Good article. I’ve been thinking about this from a slightly different direction for a while now, and have not really come up with a satisfying answer.
I have two kids, ages 9 and 10, and they occasionally express interest in writing a program. The problem is that the way I learned to do it, using BASIC on a Commodore PET or Tandy TRS-80, starting with '10 PRINT "Hi There!"' and working up to PEEKing and POKEing into the hardware, does not really seem to be available any more.
What does anyone else think? Have any of you cracked this nut?
Actually, I do have a solution, which seems to be working, for getting my sons up and running with computer programming. I’ll blog about it soon.
Mike, can’t wait to see what you do for your kids.
Kids nowadays have missed out a lot.
Programming is not fun anymore.
Blame it on the web (yes, I hate web development), blame it on new-gen programmers (“coders” is more suitable), knowin’ a gazillion class names and not knowin’ the difference between an array and a pointer to an array.
Lately everyone can write (nasty and buggy) software. C’mon, even in virus writing there’s been a regression in terms of skills needed (remember writing 128-byte little programs in the old days? quite challenging); now it’s three lines of BASIC (’cuz the real virus is the OS, and not only Windows: even on a Linux box you can create viruses, but there we call ’em rootkit exploits).
Frameworks… mmm, the only real need for frameworks is for managers to fill up their mouths. Yes, you save a lot of time in the dev phase, but when you have to do something slightly different you pay all that time back. Interop between the maelstrom of libraries is the next enemy. And it happens with both free & non-free libs (OpenView from HP is a good example; the Axis2 example is a good one too).
Someone must tell me why with 55 kbytes I was able to draw endless 3D worlds (on a Tseng ET4000), using just 1 meg of video memory and 4 megs of RAM, while now, to run a graphical clone of elm, I need a rig with 4 gigs and 4 cores (cannot call ’em CPUs, sorry).
Web development is the real culprit of this situation; well, actually, modern expectations of web development are. Back in the very beginning of the web era, writing web applications was intriguing. You had to use your brain at 120%; now it’s just a race against other players. You have to be fast, so you glue something together and hope it will work (at least as long as you’re in charge of maintenance). This is the modern coder’s way of thinking.
Lately everyone is focusing on web development. Oh yeah, I also earn my dirty money with the web, but why? Why should I write an order-management application which slows down the end user? (Compare a web application for order or invoice management with a good old terminal application and you’ll see what I mean.)
The Web may seem wonderful, but is merely a toy.
And please… don’t tell me “I’m programming in XML”… a programming language is far better than a mere data-description formalism.
But this is just more hard evidence that professional programmers are very, very rare.
(Sorry for my bad English.)
PS: Knut is completely right too
Enjoyed both articles and you have it spot on.
Programming these days mostly involves the web, with a very thick software stack. In the past a simple double-click would load code and data into memory; nowadays users expect programs to work online and offline while being stable, fast, multiplatform, etc. What this means is that two tricks of computing, indirection and abstraction, are just going to accelerate to support these features.
So now the trend is towards virtualization. Yes, the stack has become so thick we now need to abstract the physical hardware itself in order to get our programs to run fast, if at all. In this regard programmers have to adapt our tools and methodologies to this paradigm. Thus the growing popularity of functional languages like Haskell, Scala, ML/Scheme, LISP, Clojure etc.
I’m learning Clojure now because it is pragmatic and functional, doesn’t throw out the investment in Java frameworks and has its vision firmly set on the multicore present. That, coupled with the fact that a lot of AI/data mining/KDD software is written in Java seals the deal for me.
My big breakthrough came in the early ’70s when I had to write a system to let our users work with APL under TSO, but the deal was we had to control their system resource usage and our overall project cost. (Computers charged by the second back then.) TSO was the time-sharing option for the much-reviled OS/360. (One programmer actually set off a bomb at IBM’s 5th Avenue office in protest. That’s how bad it was.) It took me a few days. IBM terminology might as well have been Latin, but once I got my bearings, all the pieces came together: PL/I for the high-level components, BAL for the low-level stuff, TCAM for terminal input and output. (Real input and output involved punch cards, tape, and printers.) I still remember the heady feeling. I had absorbed an alien mentality, but with my new lobe of brain I could move mountains, or at least type on the user’s 2741 terminal.
There is nothing like the power of a well written library or framework. Even now I write little hacks to swirl video images from my iSight camera just because Apple’s framework for doing it is so pretty.
Over the years I’ve written compilers, operating systems, and interpreters, as well as a ton of code at the application level. At every level there is always a lot to learn, but it is that knowledge gained that gives a sense of power. Yes, there is often something or other that looks trivial but winds up nearly impossible. Sometimes the machine lacks an appropriate interrupt mask, or there is a known race condition in the interprocessor hardware queue. Other times it seems impossible to find out where the cursor is, or to keep the font specification from getting overridden. That’s why most programmers are paid better than the guy who makes fries at the fast-food joint. He has to deal with one nasty, stinky, hissing deep fryer full of boiling oil. A programmer has to deal with dozens of things almost as nasty and recalcitrant.
One reason programming isn’t always fun is that there are an awful lot of customers who just want to do the same thing, and writing the code just means going through the motions. It’s worse when they specify how the code has to be put together. You have to do the hard work of getting the pieces to fit, but without the fun part of choosing the right pieces. Let’s face it, an awful lot of programming is boring. If every customer wanted to do something really new, the job would be a lot more interesting.
On the other hand, I understand the frustration of poorly designed frameworks. Even the french fry guy has a handle for the fry basket. Programmers often find themselves with burned hands for lack of an equivalent. It might be interesting to put together a proper critique of the flaws in existing libraries and turn it into a book that could be used to help teach the programmers who design and build our libraries how to avoid these problems in the future. Only in rare cases, like with TCAM, is the problem actual malevolence. Usually it is just ignorance or lack of imagination.
You make some good points. On the other hand, I grew up in both the “build it yourself” C64 generation and the “wire things together” Java generation. I much prefer the latter, because I can actually get things done more quickly. The pleasure in coding isn’t gone, it’s .. different. More of a sense of accomplishment tied to the organization/business/mission than in the sheer pleasure of “Just Coding”.
Arguably the world has been trying very hard to eliminate the incentive to think in terms of “just coding” as it doesn’t actually provide much economic value. (That’s because western society places higher social value on economics than on aesthetics, I guess).
The analogy I’d use is this:
80 years ago, I bet crafting a refrigerator was fun. Not too many craft fridges these days.
In other words…. knowledge evolves, changes, and differentiation marches up the stack. This doesn’t mean there isn’t room for work at the lower levels. Just that it’s not going to be the bulk of work anymore. Times change.
Kaleberg, thanks for the excellent war-story!
You poor poor chaps. I’ve read these articles and comments with interest, partly because I think you may all be mad. I write computer games (quite badly, I’m a terrible coder) but I do get to write everything from the ground up. Some chap says “here is a PSP dev kit, make me a game, lowly coder scum” and I clap my hands together like a little girl with a new pony and start to do just that. The Sony libraries (or MS, or Nintendo, or sometimes no libraries at all) just do the basic bits – all the fun coding stuff is up to me.

Downsides – I don’t get paid much because I like doing independent coding on small projects. Getting paid more in the games industry involves working with morons in suits and designers who think coders are autistic serfs and they themselves are visionary geniuses. Swings and roundabouts. You chaps undoubtedly get paid loads for working on all that horrid web code, so I humbly submit you’ve made your choice and been stuck with it. You’re working at the dawn of the internet on rapidly changing/not-nailed-down libraries in a bewildering array of soon-to-be-redundant languages.

I use C where possible, C# if I’m lucky, C++ if terribly unlucky. I beat people who hand me XML/JSON files with a stick, although sometimes I have to write quick and nasty interpreters for very specific files. I don’t use an XML library to do that because basic string functions will do the job at hand and I won’t have to commit a ton of useless documentation to memory (or useless object files to RAM). Any tools code can be written in C# – perhaps the best language advance I’ve seen in years.

Most of the choices are my own – and there is a lot of freedom and fun to be had coding without libraries. First you write a sprite plotter, then a font handler – and the little PSP flickers into life with your own fun stuff working its magic. Just like coding on a BBC Micro/C64 but with more colours and less assembler.
If you want to teach your children to code in a similar fashion to the old 8-bit days, give them a PSP or GBA dev kit, buy an old Acorn Archimedes off eBay so they can learn about structure and play Elite, or better still, hand them XNA and let them start by making games. Give them C++, Lisp, Perl, PHP or something with SQL in it if they’re adopted and you hate them.
Problem is, this IS what we were promised. Remember back in the 80’s when the dream was all about “reuse”? Well, here it is. This is the age of industrial software production.
It’s exactly the same as buying a bookcase. You can go to Ikea and buy a cheap, mass-produced Billy to do the job, which it will – after a fashion. Or you can get a craftsman to make you a custom-built one to fit the things you want to store.
However, the problem is that now, people can’t make bookcases any more; they just know how to use a hammer and an allen key.
My wall shelves fell down a few months ago. I priced Ikea (and Staples and a few others) for bookcases. They weren’t cinderblock-and-board cheap, and they were ugly. So I called a carpenter I knew, and he had a cabinet maker (they’re all very specialized, and if you’ve seen the tools they have it makes sense) make me a set of bookcases, which he would install. It was maybe a 50% premium. The build quality is excellent, with all those fancy joins that cabinet guys have in their libraries, and it looks great.
Yes, people still make bookcases. Check Craig’s List or the like the next time you need some. Ikea will win on price, but the handmade ones are much better than Ikea.
I remember back in my programming adolescence that one of the holy grails that Object oriented programming was going to deliver was “reusable code”.
Be careful what you wish for. We’re now awash in thousands of pieces of reusable code that form a language unto themselves. A puzzle where the pieces don’t quite fit together.
The job of programming has changed fundamentally because most of the “interesting” algorithms that are commonly used have already been encapsulated in a class, library or framework. Short of specialized high-performance requirements and new developing fields of technology, just about any program or web app can be glued together from existing libraries with all the hard work in the integration rather than in addressing the actual requirements of the app. And as fast as a new technology is developed, there’s an open source library out for it on the ‘net. And of course none of us wants to be accused of reinventing the wheel.
Programming takes a different mindset now, because the types of problems we are solving are different. You either work at a highly abstract, generic level that takes you beyond solving specific problems into the realm of creating your own tools and systems that generate tools. Or you are a tile-setter, laying down the tiles created by the abstract programmers and filling them in with grout to cover the floor in the desired pattern.
I side a bit with ipsi. Ever been asked by the PHB why it seems to have taken you 3 years to debug software that some moron wrote in 3 weeks?
But I also side with Mike. I became an engineer for 2 reasons. The first is that I am compelled to create. If I didn’t do it at work, I’d have no free time at all. The second is that I don’t much like people, so some professions wouldn’t be a good match.
Much like a blacksmith friend of mine complains against the use of pre-fabricated handrail components, I like re-inventing wheels.
Unlike him, I strive to only reinvent the wheel when mine actually is rounder and rolls better.
And I recognize that different wheels were made for different purposes, for all that they are all wheels.
Re: your comment on “useful documentation”: Amen, amen, amen, amen!!!!
I have especially run into this problem with Web Services, but I know it happens other places too. You get a description of all the stuff you can call and what you can pass, and THAT’s IT. And the person who wrote the library/Web Service seems to think that should be perfectly sufficient for you to make good use of it.
I have started calling this big gap in a lot of documentation by the name “intent” — in other words, I need to know not just *what* I can say to the service/library, I need to know *why* I would say it. What does it MEAN?!?
(For example, I have a Web Service I am integrating with that has a method for returning reporting data, which takes the parameter “ExcludeDuplicates”. Nothing beyond that in the docs. What is considered a duplicate? Why would I want duplicates versus not? Which duplicates are discarded, and which copy is kept, when I set it to true?)
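A hypothetical sketch of the difference (the function name, the parameter, and the duplicate semantics here are all invented for illustration, not the actual service): documentation that answers *why* and *what it means*, not just *what you can pass*.

```python
def fetch_report(rows, exclude_duplicates=False):
    """Return report rows, optionally collapsing duplicates.

    Intent-level documentation, not just a signature:
    a 'duplicate' here means two rows sharing the same
    (account_id, date) key, e.g. the same transaction imported
    from two feeds. When exclude_duplicates is True, only the
    FIRST row seen for each key is kept and later copies are
    discarded. Leave it False when you need per-feed totals;
    set it True for customer-facing statements.
    """
    if not exclude_duplicates:
        return list(rows)
    seen, result = set(), []
    for row in rows:
        key = (row["account_id"], row["date"])
        if key not in seen:
            seen.add(key)
            result.append(row)
    return result
```

With docs like that, a caller knows which copy survives and when the flag is appropriate, which is exactly what the bare parameter name fails to convey.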
Pingback: What Happened to Programming : Light Year Blog
In addition to all the insights here, I’d like to add that Moore’s law is largely to blame for the situation. See the graphs at the beginning of http://xlr.sourceforge.net/Concept%20Programming%20Presentation.pdf (Exponential evolution). If you are the patient type, it’s also discussed at length in chapter 4 of http://cc3d.free.fr/xl.pdf.
Pingback: Scripting Enterprise | Lusta.hu
I firmly believe that libraries have their place – they are there when you don’t have the time/resources to write it yourself… they are a crutch, a compromise and with that compromise comes the ‘baggage’ – bugs, quirks, limitations, poor/no documentation etc, etc… It’s a trade off – libraries seldom are perfect.
What happens if:
The library you chose is no longer maintained?
The library ceases to be OSS?
you discover a bug?
Where I work we use libraries (and frameworks) – I don’t always think it’s the right/best way to write *world class* applications/software – as an example a bug was recently discovered with one of the libraries we use *we* (not me) reported the bug and the reaction was ‘Yeah… weird!’ – *we* investigated and eventually resolved the problem.
From my (limited) experience libraries simply add another point of failure to your software – one that isn’t in your control.
I (and a number of my colleagues) agree with your original post Mike –
Whatever happened to programming?
Just my 2p.
Actually it should be “Poor me, my job must suck!” I would expect commenters would be lamenting about how mundane their jobs are mostly doing bug fixing and enhancements. That’s where my disillusionment is after 20+ years in this business. Things turned out a lot different than I expected when I got my Computer Science Degree in 1987. I have no passion for what I do anymore. I just collect a paycheck and wait for retirement.
The older I get, the less patience I have with writing tools like linked-list data structures and functions from scratch. It used to be fun to write them when I was in my 20s, but now I’m in my 40s and I’m now a lazy programmer. :-)
Pingback: Programming Books, part 3: Programming the Commodore 64 « The Reinvigorated Programmer
Mike I completely agree with you. I graduated recently from a top CS program.
I’ve got strong fundamentals, but I’m definitely from the glue things together generation.
I’ve always loved computers and programming, but I would sometimes feel bored or unhappy while working on commercial stuff. I’ve been forcing myself to investigate deeper. You have said exactly what I knew but couldn’t quite allow myself to accept. I really do love programming, and that’s why it’s so painful to accept.
I now more clearly understand the attitude of many of the brilliant professors
around here. I used to wonder why aren’t all these brilliant people starting
companies and getting rich. I think the reason is that it wouldn’t be very much
fun to them. Although a new social collaborative media networking thingy site
could be flipped for millions, these guys know what it really is. The underlying
fundamentals just aren’t that interesting. The doing of it isn’t pleasurable.
I’m with Alan Kay. If you don’t know who he is and you work with computers, you
had better learn. He invented the GUI, OOP, Smalltalk, and a bunch of other
things. He says the problem with the way things are going now is that we
think we know what we are doing. We think we know how to build systems. So we
just blindly race forward without thinking about what we are doing. We have gotten comfortable with how shitty things are and have come to accept it.
This talk was given in 1997. Towards the end of the talk, pay attention to some
of the things he hopes to see in 10 years (a metaobject protocol for the web – websites
as objects, with the ability to learn how they can interact with other objects, …).
We have to ask ourselves, “what are we doing to move forward?” Are we finding better ways, or are we just running around in circles doing the same damn things? This should be asked both about our lives and our programming paradigms.
Pingback: Ce se intampla cu programarea? | GanaiteD
If I may add something. What bothers me more is developers/programmers who say that learning the basics is completely unnecessary because “we have frameworks and libraries that do it all for us!”.
What you call your tangential rant may actually be at the core of the problem. I get the impression these days that many library or framework developers… uh, let me interrupt myself here, and stick to the term “framework”. Frame. That word fits well… that many framework developers really do program their stuff for a single use-case. The worst example of this that I’ve seen is UIKit, the collection of user interface APIs for iPhone OS. Yep, you can create shiny buttons with a single line of code, but making them non-shiny takes a while (slightly exaggerated)…
The problem I think lies in trying to be too smart on behalf of the framework’s users. It’s terribly tempting to do that, and I’ve been frequently guilty of doing that myself. But the best type of APIs, the ones that hold up for the longest time, are terribly, terribly stupid ones. I’m thinking of e.g. [f]open/[f]read/[f]close. There’s not much you can do better than that (again, slight exaggeration).
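That terribly-stupid style is easy to show concretely. Python’s `os` module exposes thin wrappers over those same classic calls; the temp file here is just scaffolding for the sketch:

```python
import os
import tempfile

# The entire mental model of the classic file API:
# get a descriptor, move bytes through it, give it back.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, fd")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)
os.unlink(path)
```

Three verbs, learnable in five minutes, unchanged for decades; there is no framework to configure and nothing to subclass.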
That simplicity and usefulness becomes harder to maintain the more stuff you’re trying to accomplish in your framework. When you get to the realm that most developers seem to work in today, the web, there’s so much for a framework to do that it seems impossible to do well (from a reusability point of view).
There are of course ways out of that, but they usually take time. And thinking of interfaces and their potential uses first is not something that seems to come naturally to every developer.
unwesen, I think you’re right. The glory of open()/close()/read() is their simplicity: you can learn them in five minutes, and they will be with you for thirty years. We should all make every effort to make this true of our APIs. No, it’s not easy, when modern library (and especially, ugh, frameworks) do much more than those old system calls; but “it’s not easy” was never a valid excuse for not doing things right.
Hey Mike I’m with you on frameworks and ugly-interfaced libraries. I write embedded software (closest I can get to work on ZX Spectrum :-) and whenever we had to use — or make, as we did in one instance — a framework, it was crap.
As for libraries, I think a library is fantastic whenever its interface is made of purely primitive types, and it is optionally delivered as a DLL as well. When a library is useful, such a clean interface makes me cry. The problem you refer to is when a library requires a complex header file with strange structures that you don’t want, and which forces you to bring all kinds of .h’s and .lib’s into your project. C++ is IMO by far the worst offender, especially with its templates.
Anyway, the topic stirred up emotions that I had to vent. Thank you for providing the outlet. :-)
I agree with some of the sentiment here that if you want a pure experience, there’s nothing like doing embedded work. In that world, not only are the libraries and such sparse, you often cannot afford them, because of timing or memory constraints. You really need to optimize. And, as is true everywhere, the best optimization is done during design.
Working for a bank, I DO agree with your post.
Good old times have changed. Just make a comparison with networking:
In the good old times, telco engineers developed protocols and the hardware needed to support them, which were really creative activities.
After a decade most basic needs were covered (FTP, TCP, IP, SNMP, HTTP, …).
Then telco engineers moved to be plumbers fixing impedance mismatch more than creative developers.
This is just an expectable evolution.
The creative part has moved to another level: managing complexity, designing whole systems, taking care of system evolution, protecting systems from really bad coders, etc. The problem is that, as an old-days programmer, you also have to move to a position (e.g. as an architect) that lets you be creative, and (sadly) you have to abandon your beloved emacs and unit tests. The good part (and challenge) is that you can use your coding experience to guide the new generation of “plumbing” coders, because they have no clue about the science/art of programming.
@unwesen. Yep, that’s pretty much the problem. Bad libraries suck. Good libraries don’t. What’s the difference?
Well, good libraries were there to solve a real need, by programmers who realised that others may want to do something similar. The worst are solutions in search of a problem.
Pingback: Whatever Happened To Programming? « Rubber Tyres –> Smooth Rides
Ah! Steve Yegge discusses this exact problem in “The Emacs Problem”: http://steve.yegge.googlepages.com/the-emacs-problem
His conclusion seems to be that with a homoiconic language with powerful macros, you can indeed avoid needing 16 different languages to get anything done — but that we’ve tried that before, and programmers (in general) tend to really like things like C++ and XML, despite being more work for them in the long haul.
“I’m grateful for all of it, but it just doesn’t seem like what I was promised when I followed SICP for the first time”
Pingback: 你是在重新发明轮子么？ « 辛望的开发日志
“(Again: this is not always true of all frameworks. I hope from the bottom of heart that when I start using Rails, it will prove to be among the exceptions. But my experience so far does not make me optimistic.)”
Yeah. I wrote something in Rails, and it went OK. But your lack of optimism fits, it EXACTLY fits your description of a framework. My project was not too complicated and I was already “hitting the walls” where I had trouble making my project conform to the mold of a Rails project, I’d try adding a feature one way, find it was breaking unrelated stuff, try it another way until I was still “within the framework”.
I read both your posts. Well said, every word of it.
Impedance mismatch has become a serious issue. Frameworks are just too big and too boxy.
Libraries consisting of pure static functions (along with documentation) probably are the best solution to this problem. Apache commons lang libraries are a great example.
Rails (and other frameworks, including those in other languages) are just a massive waste of your time, talent and creativity. Give me reusable functions. Not opinions. Differences could be subtle but profound.
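A sketch of what that pure-static-function style looks like, in the spirit of commons-lang’s string helpers (this is hypothetical Python for illustration, not the actual commons-lang API):

```python
# Stateless, configuration-free helpers: no inheritance, no framework,
# no opinions about your application's structure -- just functions.

def abbreviate(s, max_len):
    """Truncate s to at most max_len characters, appending '...' if cut."""
    if len(s) <= max_len:
        return s
    return s[: max(0, max_len - 3)] + "..."

def is_blank(s):
    """True for None, empty, or whitespace-only strings."""
    return s is None or s.strip() == ""
```

You call them when you want, ignore them when you don’t, and reading their source tells you everything there is to know.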
Pingback: The radius of comprehension — in PragPub « The Reinvigorated Programmer
Pingback: DataConnect.be Blog » Blog Archive » Great article on the reinvigorated programmer blog
Pingback: Testing is not a substitute for thinking (binary search part 3) « The Reinvigorated Programmer
Seems to me that there are 2 types of readers here. One is the hardcore old-timer and the other is one who doesn’t know the first thing about computer science.
The latter will never understand your article. They could not appreciate what it means to write something that is extremely simple, fast and scalable to address a particular problem.
For them, I recommend reading a post by the author of Mailinator, who wrote how he solved a scaling issue by rolling out a very specific solution. It’s not foolproof – but it’s PERFECT for his particular problem.
Not quite nearly enough mention of CS research in the comments! Just take a look at the projects over at Microsoft Research for example — all the fun stuff is in research nowadays! “Retail” coding, where most of the corporate demand is, can’t really be challenging anymore, but it’s accessible to more people. The only problem is that guys like you, who became attracted to CS for the fun of coding, are not tempted to go into bleeding-edge research and read whitepapers all day long before laying down a line of code. I can understand this, it’s not instant gratification — but deep down it’s the same level of innovation as coding was 20 years ago.
So maybe as well as going to Ruby and Haskell, people should try reading up on some Computer Vision & Machine Learning, Statistical Analysis, Interactive Computation, Automata Theory?
Thank you thank you thank you for this article! It says exactly what has been bugging me about being told to use a framework and stop messing with the low level stuff.
I am not a professional programmer. I am an engineer first, and came by programming as a hobby in my spare time. I learned programming using C and some old DOS library – let me point out it was a *library*, not a *framework* in the modern sense of the word. It had a list of function calls, some examples, and 30 minutes of looking told you all you needed to know to use it at a fundamental level. It didn’t feel like it was boxing you in, choking the life out of what you wanted to do with the actual program part of your program (which I always, to this day, try to keep separate from the framework I eventually call to perform display functions).
(Actually, I first learned programming as a kid on the Apple IIE with basic, and quickbasic with DOS, but I had a break of a few years in there – I did manage a cool fractal program back then).
I oftentimes write libraries for things because I need a library for something, only to be jumped on if I ever talk about it for “reinventing the wheel”. The problem is, all the other wheels out there don’t do what I had in mind, not exactly, and not without overriding some method (where you don’t know what you just overwrote) after extending some class (which calls some 800-lb gorilla API).
Furthermore, we are entering a Library of Babel situation where re-inventing the wheel might take less effort (in both a real-world and information-theoretic sense) than finding one that fits. Please, show me the framework for implementing a graph representation of higher-dimensional polyhedra and operating on them. Or rather, don’t – not if I have to extend something and override private functions.
Why would you want to do that? Syndrome is a human condition, not a programming one.
You know, I really don’t care if I ever write an OS, but I wish my employer would at least throw me a bone once in a while instead of “programming” apps that write reports. Is there anything a real developer hates more than writing reports?
I like your point of view, it is exactly how I feel about this subject. I started programming from BASIC and have gone through many different languages and frameworks since, I think programming as a career is done, gone, dead!
The programmer title has turned into that of an assembly worker now, putting pieces together. You just don’t get any satisfaction out of it; sure, the result is interesting, but what’s the point if you are not enjoying the process?
If you are getting the joy from the fast results of putting the pieces together you are not a programmer.
We have to accept that software development is a mature science now, and there is no more value in the creative work. Sure, there is room for improvement, but just like a mature person there is no more time for the fun stuff and wondering about possibilities.
Pingback: Whatever happened to programming | The Personal Blog of Artem Koval, M.Sc.
Yes, today programming mostly sucks. Libraries, tools, frameworks and languages are too often bloated, most of the time badly documented, and our job appears to be more that of a professional Googler than a software engineer.
And the reason, the only reason I see, is “fake infinite resources”. We live in a world threatened by a major climatic problem, and we still use big cars to drive from a 300-square-metre house to the big mall to buy thousands of short-shelf-life Chinese products each year (and those who don’t are mostly dreaming about living that way). Is that optimized? Is that an intelligent solution?
Of course it’s not. And how does the software industry relate to this? It’s the same. Poor workers and exploited countries have made Moore’s law real, opening wide the doors of short-life bloated software. This would be totally impossible if computers were paid for at the real price (including the cost of the fake infinite resources), because no company would like to grow their systems the way they do.
The problem is not programming for business or not, web or not, finance or not. It could be fun anyway if you like to have your job well done. But it can’t be well done; it’s bloated from the start, preventing guys like you and me from feeling any more satisfaction. With more limited resources everything would have been different, and the software would be more hand-crafted, carefully crafted, because it is human nature to want meaningful things well done – except for the few geeks who need to LOVE anything new to stay alive, and who strongly believe that our way of life and work are fine.
[Not being a native English speaker, I apologize for any reading injury]
I have been in IT for over 20 years now. I don’t think programming has changed that much. The difference with 20 years ago is that there was hardly any internet, so it was only possible to learn about libraries through magazines or maybe usenet. In other words, you just had to write everything yourself. Ignorance is bliss.
Some people I work with today just live like that. They say “It is more fun to write it yourself”, and so they do. Why don’t you?
“But you wouldn’t want to have to write printf()!” Heck, I wouldn’t even want to use it!
Pingback: Most interesting links of November « The Holy Java
Pingback: coding conventions | feffi.org
I’m aware that this is an old discussion, but I’m not writing this reaction because I want to discuss it.
I just want to vent my frustration with over a decade of wrestling with frameworks and this is just about the only article that I could find that comes near to how I feel about it.
This is just how I feel about it: I won’t argue that it goes for software development in general. So, please don’t waste your time trying to point out why I’m wrong or have to ‘just deal with it’. I’ve read all the comments and have been through the arguments dozens of times: apparently there are people who don’t mind working with frameworks; there are those who dislike them but accept them and can work with them; and there are some (like Mr. Taylor, and myself) who intensely dislike them.
The only thing that I want to find out is whether I should quit the IT field altogether, or whether I just need to find another sort of job than the ones I’ve had up to now.
Rather than statements like “Frameworks suck” or “Frameworks are brilliant”, I’m interested to know what it is with frameworks that causes my dislike – and: what is it in *me* that causes me to dislike them so much?
First, because I’ve often seen the words ‘framework’ and ‘library’ used as synonyms:
I consider something a Library if it:
– offers reusable functionality that usually performs one single task (e.g. jQuery)
– often works “out of the box”
– is (usually) written in the language that it is meant to be used with
– it’s essentially possible (though not necessary) to understand how it works using existing knowledge of the language (for the most part)
I consider something a Framework if:
– it offers a complete structural software architecture (mvc, database abstraction, design patterns)
– it dictates a fundamental change in the way you code your program, essentially to the degree that someone cannot understand it anymore without being familiar with that framework
– it needs to be configured (using one or many additional configuration files) in order to work
– it comes with custom keywords / annotations or even its own language (e.g. HQL)
– it’s often a ‘black box’ that you cannot look inside of in order to understand how it works
– there’s an 1100-page “Unleashed” book about it
I’m fine with libraries. My problem is just with the frameworks.
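To make the distinction concrete, here is a toy Python sketch of the inversion of control that, for me, defines a framework (every name in it is invented for illustration): with a library you call in; with a framework you hand your code over and it decides when to call you.

```python
# Library style: your code keeps control and calls in when it wants to.
def slugify(title):
    return title.lower().replace(" ", "-")

# Framework style (inversion of control): you register code,
# and the framework decides when -- and whether -- to invoke it.
class TinyFramework:
    def __init__(self):
        self._handlers = {}

    def route(self, path):
        """Decorator that registers a handler for a path."""
        def register(fn):
            self._handlers[path] = fn
            return fn
        return register

    def dispatch(self, path):
        # The framework, not you, invokes the handler.
        return self._handlers[path]()

app = TinyFramework()

@app.route("/hello")
def hello():
    return "hello, " + slugify("Dear Reader")
```

Note that `hello()` never runs until the framework’s `dispatch` chooses to call it; your program has become a plug-in to someone else’s control flow, which is exactly the property that makes frameworks hard to understand from language knowledge alone.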
I started out in an Oracle PL/SQL job, with some Forms and Designer work. I loved it, and there was a sense of ever-growing mastery. In ’99, Java came along, which only added to my motivation. But then, at a Sun Java event in Rotterdam in 2001, I went to a lecture about Web Services. The guy who gave it wasn’t the usual IT geek talking about technicalities, but some sort of sales person. He talked about acronyms like WSDL, UDDI and SOAP in a glib way that suggested that these things were familiar to everyone; and indeed, looking around, I saw no-one looking as bewildered as I felt myself there and then. It’s not that he introduced something new and unknown: he was selling something.
Afterwards, I asked some of my colleagues whether they’d understood all of it. None admitted that they hadn’t, though I’ll be darned if they understood it any more than I did.
The difference is that they could live with a kind of “suspension of non-understanding” and just filed it away in some mental deference-cabinet labeled “maybe figure this out later on”.
These were the same people who often stressed that it was important to become familiar with Application Servers and put them on your resume. They often mentioned Bea and OAS, but at the same time I could never get from them a cut-and-dried description of what an Application Server was, what it did, and – most importantly – why they were regarded with this strange, almost religious awe.
Thus, in what was a very short period, the whole area of software development shifted and turned from a thoroughly rational, although creative craft into a mysterious, shifty and opaque world where everything depended on fashion, reputations and opinions. Instead of building on what I already knew and gradually become more skilled and versatile, it now resembled a kind of hipster contest: whoever knows the coolest frameworks and the best buzzwords wins.
And so it has ever been since then, barring the occasional method body or DB procedure that’s left to fill in. Because I still wanted to work in this field, I tried to adapt and understand whatever the job required: SOAP & UDDI, Spring, Hibernate, Struts, EJB’s (in three consecutive flavours), Ant, Maven, GWT (and a dozen more) – deployed on OAS, Tomcat, Websphere, JBoss (including the niceties of v. 7.1) & Glassfish – and, most recently, that abominable JEE-fication of PHP called Zend Framework 2.
Back then, in the early 2000’s, I felt like I was the only one who hated this. I tried to deal with it by rationalising it away, or just taking it in good humour: on one job I formulated a set of “Framework observations” (nr. 3 was “The time saved by any framework is always less than the time required to learn that framework”).
But in truth, it wasn’t very funny at all. In the past few years I have had, more than once, the sinking experience that something that I just had realised with blood, sweat, tears, hair-pulling and gnashing of teeth in three weeks, could have been done in one-tenth of the time, ten years ago.
I can’t say that I produced anything in the past, say, seven years, that I am really proud of. Not in software, at least.
And it really isn’t just me. Maybe it hurts me more on a personal level than it does with others. It’s possible. For some reason, my mind seems to actively resist absorbing knowledge related to frameworks and “abstraction layers” in general. Even when I really want to know: it’s like trying to take a dog out who doesn’t want to, and plants all four of his heels firmly into the carpet. I seem to think a lot easier “bottom-up” than “top-down”, so maybe that’s part of the problem.
But I also see so much effort go to waste: these enterprisey frameworks have become such a dogma in many a developer’s mind that the thought doesn’t even occur that maybe the case at hand could be solved far more simply than by throwing yet another whizzbang framework at it. One example from my recent personal experience: at a large library they’re building a storage system for their digital content. It’s not a distributed system: it lives completely inside the institute’s network; ingest, analysis, storage and all. When I came onto the project, the first thing that struck me was that they had used web services for the communication between two parts of that system. Now, I won’t say that I’m any good at architecture – but, for crying out loud: web services when there’s ONE sender and ONE receiver? What kind of madness is that? (And to make it even better, they chose WebSphere to implement that messaging system.)
I asked everyone what the reason was for this decision. But no one knew: I got those typically evasive answers that signify “no, we haven’t got a clue, but trust me, it’s probably a good idea because it’s complex, expensive, and someone recommended it, but I forgot who it was … oh, I have to run now …”
On that project I was asked to create a user interface for the ingest process: it was a JEE system, running on JBoss, built with Maven. And the UI was to be created in Google Web Toolkit: such a nice, elegant GUI framework! I suggested using JavaFX, it being “just Java”, but that was swept off the table. GWT it was.
It was a nightmare. It took me three months to create something completely trivial. It would’ve been a joke if it wasn’t so sad. And yes, of course I should have told them “forget it, this doesn’t work” earlier. Regrettably, I’m also very stubborn, so I kept at it until it worked. The result: I just heard that they’re not going to renew my contract – basically because of that experience, even though I functioned great on other projects.
All in all, I think that I should now decide whether or not I want to keep working in software. I’d hate to leave: I know that there are programming jobs that I excel in. But with the bulk of my experience in Java, Oracle, plus some PHP, Perl, Python and whatnot, it’s not going to be easy to find a job that isn’t being enterprised to death by the ubiquitous frameworkification of software development.
Lúthien (cool name by the way),
Thank you for this long and heartbreaking comment. All I can say is (1) that I feel your pain; and (2) the jobs you dream of are out there. I think that to get back to the programming you love, you may need to do two things.
First, get out of Java. I know it’s a comfort blanket, but for whatever reason it’s an absolute magnet for the kind of framework frenzy you despise.
Secondly, you might need to work for a smaller company (and accept the reduction in financial security that sometimes comes with this). For reasons I discussed in another post, the bigger a company gets, the more it tends to rely on this kind of soul-crushing, by-the-numbers approach.
I wish you the very best. If you do decide to make a leap, I hope you’ll come back and tell us about it.
I saw this story this morning and thought of this conversation.
Follow-up on the story, which explains what happened and why the package was deleted. Other people have probably seen this already, but it’s worth posting if for no other reason than that it will preserve it in case somebody comes back to the thread later.
Based on this extract, it does seem that the NPM management treated Koçulu very badly. I can understand his pique.
But the real bug here is that it’s possible to rewrite history in NPM. Fine if Koçulu wants to stop contributing new material; but what’s released needs to stay released, always. That’s a big part of the point of open-source licences.
I haven’t worked with either node.js or npm, so let’s see what they say. . . .
It is, of course, confusing, but here’s what I found about Licenses.
So not everything on npm is open source. Also reading about name disputes makes it clear why npm might think that unpublishing was a good idea:
Also, consider what would happen if Koçulu updated all of his packages so they just contained a single line saying, “I no longer wish to participate in NPM.” He’s welcome to do so, and I believe that, in that case, anybody who referenced a specific version number of left-pad would be unaffected, but people whose code referenced left-pad without a version number would automatically get the latest version, and their code would stop working.
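The difference can be sketched with a toy resolver (this is an illustration, not npm’s real resolution logic, and the version numbers are hypothetical): a project that pins an exact version keeps getting the same code, while an unpinned dependency silently follows whatever was published most recently.

```javascript
// Toy illustration (not npm's real resolver) of why pinning matters.
const published = ['1.0.0', '1.0.1', '1.0.2']; // hypothetical release history

// Exact pin: reproducible — the same version on every install.
function resolvePinned(version) {
  return published.includes(version) ? version : null;
}

// No version constraint ("latest"): whatever was published most recently,
// including a hypothetical "I no longer wish to participate" release.
function resolveLatest() {
  return published[published.length - 1];
}

console.log(resolvePinned('1.0.0')); // always '1.0.0'
console.log(resolveLatest());        // '1.0.2' today — could change tomorrow
```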
But, of course, you want people to default to the latest version so that they get bug fixes . . .
I’m a developer, but in this case, I can’t side with Koçulu. The lawyer explained that trademarks have to be defended, or they are lost. This is a specific aspect of intellectual property laws in the US.
The response was, quite literally, “fuck you”. And talking about “corporate lawyers” with a derisive tone, using it practically as an insult. This is totally immature, indicative of an ego the size of a small planet.
The reaction after that, pulling all his modules, was completely knee-jerk. “You go my way or I leave”. Also incredibly immature. That’s no way to resolve a dispute. No matter his technical abilities, this guy seems to be what some call a “diva”. Chances are that for many employers, this would be a big red flag.
The media coverage is worse. It makes him appear as some kind of demi-god whose code was so essential that removing it “broke the internet”. In reality, he happens to have written a function that right-aligns text.
That this function happens to be frequently used is a testimony to the wisdom of NPM, who provided a very neat package manager for the Node.js community. It is in no way a testimony to the technical prowess of Koçulu. The function is simple, and not particularly well written. For example, it misbehaves if its “ch” argument does not have length 1.
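The quirk is easy to demonstrate with a minimal left-pad-style function (a sketch in the same spirit, not the exact published npm source): the number of pads is computed from the target length, so a `ch` longer than one character overshoots the requested width.

```javascript
// Sketch of a left-pad-style function (not the exact published source).
// It prepends `ch` (default ' ') once per missing character of width —
// which assumes `ch` has length 1.
function leftPad(str, len, ch) {
  str = String(str);
  if (ch === undefined || ch === '') ch = ' ';
  let count = len - str.length; // pad count, not remaining width
  while (count-- > 0) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad('7', 3, '0'));  // "007" — as expected
console.log(leftPad('7', 3, '00')); // "00007" — 5 chars, not the requested 3
```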
I haven’t read much coverage, beyond what I’ve linked here, but this seems obviously true: “It is in no way a testimony to the technical prowess of Koçulu.”
Had he written some package which was complex, well-regarded, and irreplaceable then both he and NPM would have thought about, “what happens if we pull this.” But that didn’t come up precisely because (I assume) both he and they thought of the package as trivial.
Well, Christophe, I don’t see that how good or bad a programmer Koçulu is has anything to do with this. The principle is clear: once a name has been handed out to someone, it should stay with that person unless they choose to yield it. It’s the same principle that applies to Gmail usernames, Twitter handles, web domains and indeed Perl module names. Why NPM thought it would be a good idea to violate this simple, comprehensible, reliable principle is baffling to me. The quality of the code involved is a total irrelevance, as is the question of how mature a person Koçulu happens to be.
Mike, when I wrote “No matter his technical abilities”, I was precisely underscoring that this is not what matters. What matters here is trademark law. This may be unfortunate, but that’s not the point.
And as you said, the principle is clear, except it’s not at all what you think. Once a trademark has been handed to a company, that company has not just a right but actually a duty to defend it, lest they lose it. Try taking a “Coca-Cola”-derived Gmail username, Twitter handle or web domain, and see how long your principle about “first use” lasts. The web domain “coca-cola-sucks.com” is free, if you want to try ;-)
So my problem is that this developer did not know anything about trademark law (reading https://cyber.law.harvard.edu/metaschool/fisher/domain/tm.htm would have helped, notably “What constitutes trademark infringement”). And when the lawyer tried to explain, underlining that they had to actually defend their trademark or lose it, he was openly hostile and derisive in his response. In short, he really gave the lawyer no choice but to force the hand of NPM.
I don’t think NPM had any choice in this matter, actually. Again, it’s just the law.