You still hear a lot of talk about Web 2.0. You might even call it a movement. It’s exciting to a lot of people, but I am troubled by it. It strikes me as a very rigid form of constrained thinking, coloring inside the lines. If you insist on buzzwords, I’m not so much interested in Web 2.0 as I am in Net 2.0.
We seem to have forgotten that the Web is only one application of the Internet, or Net for short. There was a time when the Web didn’t exist, when we used the Net for other things. Telnet (and its modern avatar SSH), FTP, email, Usenet news, and IRC come to mind. These are still around, but to varying degrees have lost the importance they once had. Email is still going strong, but competitors are starting to chip away at it, making its future dominance unclear. The same will be true for the Web some day. Today, the Web certainly dominates the Net in the amount of time spent using it, even if it doesn’t dominate the number of bytes transferred. That dominance is reflected in the fact that we tend to conflate the terms Net and Web. However, there’s good reason to believe this will not be the case in 20 years or so. The Net has changed before, and it will change again. The Web is not the Net. Who will be the next Gopher?
I should point out that the inventor of the Web, Tim Berners-Lee, has thoughts of his own on the difference between the Web and the Net. He foresees the need for a Semantic Web, which he would rather call a “Giant Global Graph”. I’m not here to disagree with him; I’m merely suggesting we open our eyes as wide as possible. In fact, I do believe the Semantic Web, with its focus on computers rather than humans, will be an area that provides us with some large advances in computing. But it is not the horizon, only another step along the way.
The Web was designed to deliver a “web” of hyperlinked documents, containing text and images, each with an address. It does this quite well. Originally, Berners-Lee conceived of a medium for writing as well as reading, which I believe was the original intent of the HTTP POST verb. The Web never really embraced this “produce as well as consume” spirit until wikis came around. Wikipedia showed what you could accomplish with true read-and-write collaboration on hyperlinked documents; it would not have been possible to replicate Wikipedia’s success without the technological underpinning of the wiki-augmented Web. Used appropriately, the Web is a great medium on which to both produce and consume content.
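To make the point concrete, here is a minimal sketch of what the POST verb has always allowed: a client sending content *to* a server, not merely fetching it. The host and path below are made-up examples, not a real endpoint; the snippet just constructs and prints the raw request text to show how simple the “write” half of the protocol is.

```python
# Build the raw text of an HTTP POST request by hand, to show that
# writing to the Web was designed into the protocol from the start.
# The host and path are hypothetical, for illustration only.
body = "comment=The+Web+is+not+the+Net"
request = (
    "POST /wiki/edit HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"  # blank line separates headers from the payload
    + body
)
print(request)
```

Any wiki edit, blog comment, or form submission ultimately comes down to a request shaped like this one.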
The constrained thinking of the Web is about to be taken to a logical extreme in Google’s Chromium OS. Rather than just a browser built with Web technologies, it is the entire user interface of an operating system (in this case, a Linux kernel) built with Web technologies. What happens when you plug a scanner into a computer running Chromium OS? I’m sure Google will solve this “problem” eventually, but we already live in a world where plugging a scanner into a computer isn’t a “problem”. The number of wheels that need reinventing to build an all-encompassing “Web” UI seems counter-productive to me. The mental effort would be better spent on unsolved problems, such as better natural language processing in UIs.
What if we agreed we didn’t have to make everything a web page? What if we were designing a collaboration medium from scratch on the Net? What if we removed all the limits? You might imagine such a medium as a full 3-D environment with graphics and sound, in deliberate mimicry of the real world. Such media already exist on the Net: commercially with Second Life, in research with Croquet. These media can display documents, just like the Web can, and can address them too, but they aren’t limited to documents alone. They are not the pinnacle of what we might use the Net for, but they do show that we can use the Net for things beyond just email and hyperlinked documents.
Skeptics of new Net technologies may point out that the browser has become a universal consumer of Net content, and that any new medium on the Net not viewable in a browser is a non-starter. In the days of slow Internet connections, when it wasn’t possible or easy to download a large client program quickly, this was true. Today, however, with fast broadband connections in the home becoming the norm, if your content is good enough users will download the needed client. Second Life and Croquet require downloads. Games, which are increasingly social online applications these days, require downloads or a boxed purchase in a store. Some games are so addictive that people actually pay real money to download their large, fat clients. Your new idea can be the same; it doesn’t have to be shoehorned into “the browser”.
Despite its faults, it’s easy to see why the Web gets so much attention. It has made some people very rich, and made many others a comfortable living, myself included. So long as this is true, people will continue to develop for and use the Web. As is so often the case, money trumps technical concerns. Even today, I see so many small companies trying to re-inflate the “.com” bubble, believing that any idea will sell merely because it is on “the Web”. But in the increasingly crowded space of today’s Web, that crowding brings inherent competition, which means making money is far from guaranteed. Some day, a new killer application will appear that doesn’t use the Web, and its innovator will make a fortune as a result. The copycats will come in, and the cycle will repeat.
I have no doubt that the Web will improve as time goes on. HTML5 and WebSockets will certainly help a great deal, and I am looking forward to using them. But do we still have to see everything through the lens of a markup language and a stateless textual transfer protocol? A little revolution every now and then is a good thing.
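WebSockets themselves hint at how far the Web has to bend to escape statelessness: a connection starts life as an ordinary HTTP request, then “upgrades” to a persistent, stateful channel. In the RFC 6455 handshake, the server proves it understands the protocol by hashing the client’s `Sec-WebSocket-Key` together with a fixed GUID. A small sketch of that step, using the sample key from the RFC itself:

```python
import base64
import hashlib

# RFC 6455 handshake: the server appends a fixed GUID to the client's
# Sec-WebSocket-Key, takes the SHA-1 digest, and base64-encodes it to
# produce the Sec-WebSocket-Accept header value.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(client_key: str) -> str:
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The sample client key below is the one given in RFC 6455.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

That a whole ritual of headers and hashing is needed just to hold a connection open says something about the lens we’ve been looking through.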