"We will never have LCD screens - they will need too many connectors"
"Vector graphics are the future; raster graphics need too much memory"
"Full audio on computers will need too much bandwidth"
"Digital photography will never replace film"
"Moore's Law hasn't got much longer to go" (1977, 1985, 1995, 2005)
We all know this one. But often people don't understand its true effects.
Take a piece of paper, divide it in two, and write this year's date in one half:
Now divide the other half in two vertically, and write the date 18 months ago in one half:
Now divide the remaining space in half, and write the date 18 months earlier (or in other words 3 years ago) in one half:
Repeat until your pen is thicker than the space you have to divide in two:
This demonstrates that your current computer is more powerful than all the other computers you have ever had put together (and remember that the original Macintosh of 1984 had only a tiny amount of computing power).
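The arithmetic behind the demonstration: if computing power at constant cost doubles every 18 months, each piece of paper stands for the power you could buy in that period, and each piece is twice the size of the next older one. The newest half is therefore bigger than all the older pieces put together, since 1/4 + 1/8 + 1/16 + ⋯ never quite reaches 1/2.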
In the 1980s the most powerful machines were Crays.
And people used to say "One day we will all have a Cray on our desks!"
Sure: in fact current workstations are about 120 Craysworth.
Even my previous mobile phone was 35 Craysworth...
Just as a side issue, LEDs are semiconductors too, and follow a similar exponential improvement to Moore's Law: lumens per device are increasing exponentially, and prices are dropping.
That's why we have those tiny, dirt-cheap bike lights now.
One day, soonish, all lighting will use LEDs...
(This is a good example of a disruptive technology)
And have you noticed how LCD screens have almost entirely replaced tube TVs?
LCD screens also contain transistors, so you can predict that screens are going to get higher-density and cheaper.
(This is also a good example of a disruptive technology)
What is less well-known is that bandwidth is also growing exponentially at constant cost, but the doubling time is 1 year!
(Actually 10½ months, according to a recent statement by an executive of one of the larger suppliers.)
Put another way, in 7 years we could have 1 Gigabit connections to the home.
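The arithmetic: a doubling time of one year means 2⁷ = 128 times as much bandwidth for the same money after seven years. Assuming a fairly typical home connection of around 8 Mbit/s today (my illustrative figure, not one from the suppliers), that gives roughly 8 × 128 ≈ 1000 Mbit/s: about a Gigabit.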
Metcalfe proposes that the value of a network is proportional to the square of the number of nodes:
v(n) = n²
Simple maths shows that if you split a network into two, it halves the total value:
(n/2)² + (n/2)² = n²/4 + n²/4 = n²/2
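To put concrete (purely illustrative) numbers on that: a single network of 1,000,000 users has a relative value of 1,000,000² = 10¹², while two separate networks of 500,000 users each are worth 2 × 500,000² = 5 × 10¹¹ between them: exactly half.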
This is why it is good that there is only one email network, and bad that there are so many Instant Messenger networks. It is why it is good that there is only one World Wide Web.
The term Web 2.0 was invented by a book publisher (O'Reilly) as a term to build a series of conferences around.
It conceptualises the idea of Web sites that gain value by their users adding data to them, such as Wikipedia, Facebook, Flickr, ...
But the concept existed before the term: eBay was already Web 2.0 in the era of Web 1.0.
By putting a lot of work into a website, you commit yourself to it, and lock yourself into their data formats too.
This is similar to data lock-in with software: when you use a proprietary program you commit yourself and lock yourself in. Moving comes at great cost.
This was one of the justifications for creating XML: it reduces the possibility of data lock-in, and having a standard representation for data makes it easier to use the same data in different ways too.
But there is no standard way of getting your data out of one Web 2.0 site to get it into another.
As an example: you commit to a particular photo-sharing website, upload thousands of photos, tag them extensively, and then a better site comes along. What do you do?
How do you decide which social networking site to join? Do you join several and repeat the work? I am currently being bombarded by emails from networking sites (LinkedIn, Dopplr, Plaxo, Facebook, MySpace, Hyves, Spock...) telling me that someone wants to be my friend, or business contact.
How about genealogy sites? You choose one and spend months creating your family tree. The site then spots similar people in your tree on other trees, and suggests you get together. But suppose a really important tree is on another site?
How about if the site you have chosen closes down? All your work is lost.
This happened with MP3.com for instance. And Stage6.
How about if your account gets closed? There was someone whose Google account got hacked, and so the account was closed: four years of email lost, no calendar, no Orkut.
Here is someone whose Facebook account got closed. Why? Because he was trying to download all the email addresses of his friends into Outlook.
These are all examples of Metcalfe's Law.
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
What should really happen is that you have a personal Website, with your photos, your family tree, your business details, and aggregators then turn this into added value by finding the links across the whole web.
What do we need to make this happen? Firstly and principally, machine-readable Web pages.
When an aggregator comes to your Website, it should be able to see that this page represents (a part of) your family tree, and so on.
One of the technologies that can make this happen has the catchy name of RDFa.
You could describe it as a CSS for meaning: it allows you to add a small layer of markup to your page that adds machine-readable semantics.
It allows you to say "This is a date", "This is a place", "This is a person", and uniquely identify them on your web page.
It is comparable to microformats, but generalised.
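As a rough sketch of what that looks like (the names, dates and the schema.org vocabulary here are my own illustrative choices; RDFa itself does not prescribe a vocabulary), a fragment of an ordinary page could be annotated like this:

    <!-- The RDFa attributes (vocab, typeof, property) carry the machine-readable meaning -->
    <p vocab="http://schema.org/" typeof="Person">
      <span property="name">Jane Doe</span> was born in
      <span property="birthPlace" typeof="Place">
        <span property="name">Amsterdam</span>
      </span>
      on <time property="birthDate" datetime="1970-01-01">1 January 1970</time>.
    </p>

To a human reader the paragraph looks exactly as before; to an aggregator it now says explicitly: here is a person, with this name, this birthplace, and this birth date.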
If a page has machine-understandable semantics, you can do lots more with it.
So rather than putting all your data on someone else's website, where its meaning is only implied by which site it happens to be on, you should put your own data on your own website, with explicit semantics.
Then you get the true web-effect, with its full Metcalfe value.
It doesn't really matter where you host such a site, because on the whole websites are interoperable wherever they live, but I am particularly charmed by this sort of device:
They are wireless routers containing network storage and a media server for inside your house, while offering FTP and a world-class Web server to the outside. So you can switch off all your machines and still serve web pages to the outside world, with rather low energy use.
Web 2.0 is damaging to the Web by dividing it into topical sub-webs.
With machine-readable pages, we don't need those separate websites, but can reclaim our data, and still get the value.
Web 3.0 sites will then aggregate data from the web, and in so doing add value that will attract users.