Programming Workflow

Programmers are much enamored of their programming languages. For some, the language even defines who they are; you may hear “I am a Java programmer”, for instance. The attachment runs deeper than we think. Often without realizing it, we are tied not only to the language but also to its implementation. The implementation, rather than the language itself, imposes a workflow on how we write programs, one that we usually do not see past.

For some time now, the most popular workflow in mainstream languages has been the familiar “edit, compile, run” (ECR) cycle. It has become so ingrained in many of us that we have forgotten other workflows, or worse, never learned that any others exist. It was not always so. Popular languages of the past and present have offered a different workflow, the “read, eval, print loop” (REPL). My own first programming language was BASIC; my first programming prompt was a REPL.

Often when judging the merits of a language, we focus on the features of the language itself, rather than on its implementation and workflows. This is dangerous, since the features of the language are only part of what makes a programmer productive. The workflow of how you interact with a language is as important as the language itself. Since many of us are used to only one workflow, more often than not the venerable “edit, compile, run” cycle, we have stopped looking for innovation in how we interact with programming languages.

With ECR, everything is a program. To get the computer to do something, you must edit a whole program (whether new or a modified old one), compile it, run it, lather, rinse, and repeat. This is a lot of effort for very small tasks. It is also a lot of effort when you need to experiment with or tweak small bits of code. Sometimes you only want to know the output of one function for one input. Sometimes you just need the base64 encoding of a string.

A REPL allows you to interact with the language one statement at a time. You can start from nothing, if you are just getting started on an idea, or you can interact with an existing program, one statement at a time. Feedback is usually instant. Writing a program with a REPL still means writing a program. But if you need something done that requires only one statement, then one statement is all you need. You cannot use just one statement with ECR. Small tasks become difficult with ECR, so more often than not they get skipped.
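That base64 task is exactly the kind a REPL excels at. As an illustration (a Python sketch; any language with a REPL would do), the entire job is one statement with an instant answer:

```python
import base64

# In a REPL this is the whole task: one statement, one instant answer.
encoded = base64.b64encode(b"hello, world").decode("ascii")
print(encoded)  # aGVsbG8sIHdvcmxk
```

With ECR, the same one-liner would mean creating a source file, compiling it, and running it, only to throw it away afterwards.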

REPLs are by their nature interpreters, so they are typically provided only with interpreted languages. But interpretation versus compilation is merely an implementation detail. All languages can be interpreted, so all languages could in principle offer a REPL. REPLs have a long tradition, harking back to LISP and BASIC, both old languages. But with the rise of compiled languages, primarily C, C++, and Java, the REPL was largely forgotten by a generation of programmers. The “dynamic languages” such as Perl, Python, Ruby, and Javascript have brought a resurgence of the REPL, but not all programmers have explored the dynamic language world.
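The name describes the mechanism exactly: read a statement, evaluate it, print the result, and loop. A minimal sketch in Python (the `repl` function and its canned input are mine, purely for illustration):

```python
# A bare-bones read-eval-print loop.
# `read_line` is any callable returning the next input line, or None when done.
def repl(read_line):
    while True:
        line = read_line()            # Read
        if line is None:
            break
        try:
            print(eval(line))         # Eval, then Print
        except Exception as exc:
            print("error:", exc)
        # ...and Loop

# Driving it from a canned session rather than a keyboard:
session = iter(["1 + 1", "len('abc')"])
repl(lambda: next(session, None))  # prints 2, then 3
```

Nothing here depends on Python being interpreted; a compiled language could expose the same loop by compiling each statement on the fly.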

There’s no reason not to ship a REPL with modern compiled languages such as C# or Java. Mono in fact ships a C# REPL, and BeanShell exists for Java. But merely providing a REPL is no guarantee that the masses will use it. Long exposure to ECR has lulled a generation of programmers to sleep. Programmers used to ECR in compiled languages are often amazed at REPL demos in their favorite language, but immediately go back to ingrained habits. We are too enamored with our tools, and the mindsets they put us into.

The natural question a Software Futurist might ask is: is there anything available more advanced than the REPL? The answer is yes. There is a slight variant of the REPL, the “notebook” interface. This has long been a staple of mathematical systems like Mathematica, but has been working its way into other languages like Python. But I consider the notebook only a marginal improvement on the basic REPL. For a more fundamental paradigm shift, we have to look, as we often do, to the past rather than the future: Smalltalk.

Smalltalk implementations offer what is called “the Image”. It is a fundamentally different paradigm from ECR or the REPL. With the Image, you start a running instance of a complete environment. The Image contains the live objects of your program, as well as the development environment. You make changes to these live objects, or make new live objects, interactively with the development environment. This can be done textually or graphically, depending on what tools you choose to use and what is available at the time. Think of it as “sculpting” a collection of live objects. Once the live objects have reached a point you consider ready, you can freeze the Image, copy it, and deploy it wherever you like. In Smalltalk, you work primarily with objects, rather than primarily with text.

So far as I know, this interaction paradigm is unique to Smalltalk. Dan Ingalls, one of the creators of Smalltalk, has attempted to bring it to the Javascript language in a project called the Lively Kernel. It is a laudable effort, but sadly few seem to be paying attention. Just as we were lulled to sleep by ECR, the current generation seems to have been lulled to sleep by the REPL.

It remains to be seen whether Image-based programming is an advance on the REPL. Certainly it is essentially unknown outside the Smalltalk world, itself a fringe language by today’s standards. It is likely hampered by its all-or-nothing nature; the entire development environment must be implemented in Smalltalk, inside the Image. Entire ecosystems of supposedly language-agnostic tools are thus unable to interact with it: source control tools, documentation generators, indexers. But at least the Image is a creative idea, one designed after the lessons of both ECR and the REPL. I haven’t seen such originality in other interaction paradigms. It’s originality that’s desperately needed if we are to design tools that will make any real leaps in productivity in the future.

Bret Victor gave a great talk that touched on this at CUSEC 2012. His guiding principle is the requirement of immediate feedback. He rightly points out that the REPL is a recreation of the teletype, a now obsolete relic from the 1970s. He gives demonstrations of several Javascript programming environments of his own creation, all of which allow you to vary parameters as you edit code and see the results immediately.

Subtext is another post-REPL environment, where code and data are edited in an environment with immediate feedback. Its creator states in his manifesto, “The language is the IDE” (emphasis mine).


Tuesday, January 31st, 2012 Essays

The Web is not the Net

You still hear a lot of talk about Web 2.0. You might even call it a movement. It’s exciting to a lot of people, but I am troubled by it. It strikes me as a very rigid form of constrained thinking, coloring inside the lines. If you insist on buzzwords, I’m not so much interested in Web 2.0 as I am in Net 2.0.

We seem to have forgotten that the Web is only one application of the Internet, or Net for short. There was a time when the Web didn’t exist, when we used the Net for other things. Telnet (and its modern avatar SSH), FTP, Email, News, and IRC come to mind. These are still around, but to varying degrees have lost the importance they once had. Email is still going strong, but competitors are starting to chip away at it, making its future dominance unclear. The same will be true for the Web some day. Today, the Web certainly dominates the Net in the amount of time spent using it, even if it doesn’t dominate in the number of bytes transferred. Its dominance is reflected in the fact that we tend to conflate the terms Net and Web. However, there’s good reason to believe that this will not be the case in 20 years or so. The Net has changed before, and it will change again. The Web is not the Net. Who will be the next Gopher?

I should point out that the inventor of the Web, Tim Berners-Lee, has some thoughts of his own on the difference between the Web and the Net. He foresees the need for a Semantic Web, which he would rather call a “Giant Global Graph”. I’m not here to disagree with him; I’m merely trying to suggest we need to open our eyes as wide as possible. In fact, I do believe the Semantic Web, with its focus on computers rather than humans, will be an area that provides us with some large advances in computing. But it is not the horizon, only another step along the way.

The Web was designed to deliver us a “web” of hyperlinked documents, containing text and images, each with an address. It does this quite well. Originally Tim Berners-Lee conceived of a medium for writing as well as reading, which I believe was the original intent of the HTTP POST verb. The Web never really got this “produce as well as consume” spirit until wikis came around. Wikipedia showed what you could accomplish with true read-and-write collaboration on hyperlinked documents. It would not have been possible to replicate the success of Wikipedia without its technological underpinning of the wiki-augmented Web. When used appropriately, the Web is a great medium on which to both produce and consume content.
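That read-versus-write split is visible in the protocol itself: GET fetches a representation of a resource, while POST submits content back to it. A sketch using the Python standard library (the wiki URL is a placeholder, and no request is actually sent here):

```python
import urllib.request

# GET reads a resource; POST, the "write" verb, sends content back to it.
body = "A new paragraph for the page.".encode("utf-8")
req = urllib.request.Request(
    "http://example.org/wiki/SomePage",  # placeholder URL, illustration only
    data=body,
    method="POST",
)
print(req.get_method())  # POST
```

Calling `urllib.request.urlopen(req)` would actually perform the write, assuming a server willing to accept it; a wiki is precisely a server that is.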

But is the Web the pinnacle of what the Net can do? Are there things the Web doesn’t do well? In the beginning there were HTML and HTTP. As people tried to use the Web for newer things, the boundaries of what was possible made themselves felt very sharply. A litany of supporting technologies emerged: cookies, Javascript, CSS, keep-alives, XmlHttpRequest, and very recently Web Sockets and HTML5. The incoherent result can only politely be called a hack. This isn’t even counting the separate closed technologies such as Flash and Silverlight, which live within Web pages yet are from a technological standpoint only strange guests, living in HTTP requests but participating in the Web stack only indirectly. Adding to the difficulty, your platform, the Web “Browser”, is usually implemented incompatibly in the most insidious of ways. To develop for the Net these days usually means being a “web developer”, and it is sadly often a painful experience as a result.

The Web was intended for documents, but often we use it for applications. Gmail and TurboTax are examples of applications, not documents. Yet they are built with Web technologies, with their document-centric focus. This technological mismatch makes itself painfully felt in many ways. The phrase “don’t hit your browser’s back button” is a laughable but still all-too-common symptom of this mismatch. There are protocols designed to build applications that can be consumed remotely over the Net. X is one such example. X is far from ideal in many circumstances (it is famously hard to develop for at the low level, much like the Web, sadly), but at least it was designed with a larger world in mind than HTML and HTTP. New thinking is sorely needed in this area. There are attempts to make developing Web applications easier, such as the Google Web Toolkit (and a Python derivative, Pyjamas), but these are still built on an HTML/HTTP foundation, with all of the limitations inherent in it. More interesting is the Lively Kernel, Dan Ingalls’s attempt to preach the Smalltalk gospel to the Web generation using Javascript. Sadly, just as with Smalltalk these days, even in its new Javascript clothes not many people seem to be paying much attention.

The constrained thinking of the Web is about to be taken to a logical extreme in Google’s Chromium OS. Rather than just a browser built with Web technologies, it is the entire user interface of an operating system (in this case, a Linux kernel) built with Web technologies. What happens when you plug a scanner into a computer running Chromium OS? I’m sure Google will solve this “problem” eventually, but we already live in a world where plugging a scanner into a computer isn’t a “problem”. The number of wheels that need reinventing to build an all-encompassing “Web” UI seems counterproductive to me. The mental effort would be better spent on unsolved problems, such as better natural language processing in UIs, for instance.

What if we agreed we didn’t have to make everything a web page? What if we were designing a collaboration medium from scratch on the Net? What if we removed all the limits? You might imagine that such a medium would be a full 3-D environment with graphics and sound, in deliberate mimicry of the real world. Such media exist on the Net: commercially with Second Life, in research with Croquet. These media can display documents, just as the Web can, and can address them too, but they aren’t limited to documents alone. They are not the pinnacle of what we might use the Net for. But they do show us that we can use the Net for things beyond just email and hyperlinked documents.

Skeptics of new Net technologies may point out that the Browser has become a universal consumer of Net content, and that any new medium on the Net not viewable in a browser is a non-starter. In the days of slow internet connections, when it wasn’t possible or easy to download a large client program quickly, this was true. Today, however, with fast broadband connections in the home now the norm, if your content is good enough users will download the needed client. Second Life and Croquet require downloads. Games, which are increasingly social on-line applications these days, require downloads or purchasing a box in the store. Some games are so addictive that people actually pay real money to download their large, fat clients. Your new idea can be the same; it doesn’t have to be shoehorned into “the Browser”.

Despite its faults, it is obvious why the Web gets so much attention. It has made some people very rich, and made many others a comfortable living, myself included. So long as this is true, people will continue to develop for and use the Web. As is so often the case, money trumps technical concerns. Even today, I see so many small companies trying to re-inflate the “.com” bubble, believing that any idea will sell merely because it is on “the Web”. But in the increasingly crowded space of today’s Web, crowding brings competition, which means making money is far from guaranteed. Some day, a new killer application will appear that doesn’t use the Web, and the innovator will make a fortune as a result. The copycats will come in, and the cycle will repeat.

I have no doubt that the Web will improve as time goes on. HTML5 and Web Sockets will certainly help a great deal, and I am looking forward to using them. But do we still have to see everything through the lens of a markup language and a stateless textual transfer protocol? A little revolution every now and then is a good thing.


Friday, February 12th, 2010 Essays



Computer. n. A device for saving human labor.

Often computer science literature talks about what and how, but not so much about why. Computers are meant to make our lives easier. If they fail in this task, we are better off burning the lot of them.

When viewed as a whole, computers have saved us countless hours and enabled the human mind to achieve feats previously thought impossible. Even more importantly, they have allowed us to achieve feats previously not even imagined.

However, the picture is not always this rosy. For many of us, much of the day is spent in front of the computer. The computer should be serving us, but all too often we feel like we are serving the computer. The computer should always be saving us work, but often it creates work for us. We throw up our hands and assume this is the way of things.

It doesn’t need to be this way. The computer is our creation and when it isn’t serving our needs we must step back and figure out how to make it serve our needs.

We must remember that computers are only a means to an end; they are not the end in themselves. Edsger Dijkstra famously summed this up: “Computer science is no more about computers than astronomy is about telescopes”. We have become too complacent in our view of computers. Some of us do not remember a time before computers, and assume that the form they take and the way they operate are not open to change. We must remember that the computer should adapt to serve us, and that everything about it is open to change.

Much industrial computer science these days amounts to a paint-by-numbers approach. We follow methodologies blindly, rating ourselves only on how closely we follow the method, rather than analyzing how well we are achieving our real goals. Good software engineering is difficult, so we use methodologies as crutches to make us think we are treading correctly when in reality we are blindly going down the wrong road. Software engineering is a creative endeavor. No amount of fixed methodology will change that. To achieve something novel and useful, creative thinking is required. The computer has no creative powers. By removing the creative element from software engineering we are removing the possibility of real advances in the craft.

How then are we as programmers going to make computers improve our lives? To start, we must never lose sight of the why: to save us labor. Saving us labor saves us time, and the old adage tells us what time is: money. If you’re good at making the computer save time, you can make a lot of money. Money is a powerful motivator, but it’s not the only one. More than money, saving us time frees us up for other pursuits. The computer liberates us: by doing the grunt work, it liberates us to do the creative work. And since computer science is as much an art as a science, you could say computer science is a liberal art.

These, of course, are reasons why we must sharpen our skills as computer programmers and computer scientists. Well then, how are we to go about it? There is no single way; good old creativity and ingenuity will have to power us forward. Let’s explore how in upcoming writings, never forgetting why.


Sunday, August 2nd, 2009 Essays