"We will never have LCD screens - they will need too many connectors"
"Vector graphics are the future; raster graphics need too much memory"
"Full audio on computers will need too much bandwidth"
"Digital photography will never replace film"
"Moore's Law hasn't got much longer to go" (1977, 1985, 1995, 2005)
We all know this one. But often people don't understand its true effects.
Take a piece of paper, divide it in two, and write this year's date in one half:
Now divide the other half in two vertically, and write the date 18 months ago in one half:
Now divide the remaining space in half, and write the date 18 months earlier (or in other words 3 years ago) in one half:
Repeat until your pen is thicker than the space you have to divide in two:
This demonstrates that your current computer is more powerful than all the other computers you have ever had put together (and that the original Macintosh (1984) had only a tiny amount of computing power by comparison).
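A quick way to see why, assuming power really does double every 18 months: each earlier machine was at most half as powerful as its successor, so all your previous machines together add up to less than one current machine:
½ + ¼ + ⅛ + ... < 1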
In the 1980s the most powerful machines were Crays
And people used to say "One day we will all have a Cray on our desks!"
Sure: in fact current workstations are about 120 Craysworth.
Even my previous mobile phone was 35 Craysworth...
Just as a side issue, LEDs are semiconductor devices too, and also follow Moore's Law: lumens are increasing exponentially, and prices are dropping.
That's why we have those tiny, dirt cheap, bike lights now.
One day, soonish, all lighting will be using LEDs... (This is a good example of a disruptive technology)
And have you noticed how LCD screens have almost entirely replaced tube TVs?
(This is also a good example of disruptive technology)
LCD screens also contain transistors, so you can predict that screens are going to get higher-density and cheaper.
What is less well-known is that bandwidth is also growing exponentially at constant cost, but the doubling time is 1 year!
(Actually 10½ months, according to a recent remark by an executive of one of the larger suppliers)
Put another way, in 7 years we could have 1 Gigabit connections to the home.
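To see the arithmetic, assuming as an illustration a starting point of about 8 Mbit/s to the home: doubling every year for 7 years gives a factor of 2⁷ = 128, and 8 Mbit/s × 128 ≈ 1 Gbit/s.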
Metcalfe proposes that the value of a network is proportional to the square of the number of nodes:
v(n) = n²
Simple maths shows that if you split a network into two, it halves the total value:
(n/2)² + (n/2)² = n²/4 + n²/4 = n²/2
This is why it is good that there is only one email network, and bad that there are so many Instant Messenger networks. It is why it is good that there is only one World Wide Web.
Proposed in an article in Emerce as the result of an interview with me:
Every 12½ years computers become powerful enough to allow the use of a new generation of programming languages that give an order of magnitude more productivity to the programmer.
(In other words, what used to take you a week, would now take a half day).
The term Web 2.0 was invented by a book publisher (O'Reilly) as a term to build a series of conferences around.
It conceptualises the idea of Web sites that gain value by their users adding data to them, such as Wikipedia, Facebook, Flickr, ...
But the concept existed before the term: eBay was already Web 2.0 in the era of Web 1.0.
By putting a lot of work into a website, you commit yourself to it, and lock yourself into their data formats too.
This is similar to data lock-in when you use a proprietary program. You commit yourself and lock yourself in. Moving comes at great cost. Try installing a new server, or different Wiki software.
This was one of the justifications for creating XML: it reduces the possibility of data lock-in, and having a standard representation for data also makes it easier to use the same data in different ways.
As an example, if you commit to a particular photo-sharing website, you upload thousands of photos, tagging extensively, and then a better site comes along. What do you do?
How about if the site you have chosen closes down (as has happened with some Web 2.0 music sites): all your work is lost.
How do you decide which social networking site to join? Do you join several and repeat the work? I am currently being bombarded by emails from networking sites (LinkedIn, Dopplr, Plaxo, Facebook, MySpace, Hyves, Spock...) telling me that someone wants to be my friend, or business contact.
How about genealogy sites? You choose one and spend months creating your family tree. The site then spots similar people in your tree on other trees, and suggests you get together. But suppose a really important tree is on another site?
These are all examples of Metcalfe's law.
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
What should really happen is that you have a personal Website, with your photos, your family tree, your business details, and aggregators then turn this into added value by finding the links across the whole web.
Firstly and principally, machine-readable Web pages.
When an aggregator comes to your Website, it should be able to see that this page represents (a part of) your family tree, and so on.
One of the technologies that can make this happen has the catchy name of RDFa
You could describe it as a CSS for meaning: it allows you to add a small layer of markup to your page that adds machine-readable semantics.
It allows you to say "This is a date", "This is a place", "This is a person", and uniquely identify them on your web page.
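As a purely illustrative sketch (the vocabulary, names and dates here are assumptions, not part of the original example), a fragment of a page marked up with RDFa might look like this:
<p vocab="http://schema.org/" typeof="Person">
   <span property="name">Grace Hopper</span> was born in
   <span property="birthPlace">New York</span>
   on <span property="birthDate" content="1906-12-09">9 December 1906</span>.
</p>
An aggregator reading this can extract a person, a place and a date without having to guess them from the surrounding text.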
If a page has machine-understandable semantics, you can do lots more with it.
So rather than putting all your data on someone else's website, and the fact that it is there implying a certain semantics, you should put your own data on your own website with explicit semantics.
Then you get the true web-effect, with its full Metcalfe value.
It doesn't really matter, because on the whole Websites are interoperable.
I am particularly charmed by this sort of device:
It is a wireless router containing network storage and a music server for inside the house, while offering FTP and a Web server to the outside, plus a BitTorrent server. So you can switch off all your machines, and still serve web pages to the outside world.
Web 2.0 is damaging to the Web by dividing it into topical sub-webs.
With machine-readable pages, we don't need those separate websites, but can reclaim our data, and still get the value.
With the original web it was easy to create a website
The second most important property of the web!
Response times: 0.1 sec 'instantaneous'
1 sec acceptable
10 sec unacceptable
We will all need it sooner or later
This should be self-evident
It's amazing really how much we can achieve with the tools we have
Much of the simplicity has gone. It is no longer easy to create a website.
I have had reports of companies losing programmers to nervous breakdowns once JavaScript programs exceed a certain size.
If you go to a talk by Jesse James Garrett, the man who coined the word Ajax, you will be struck by how much his emphasis on the value of Ajax is on usability.
Ajax above all reduces latency, since the same functionality is achievable without it
Harder to achieve
Also harder to achieve
Too hard to program
Loss of structure, accessibility, device independence
Have to write your application several times for different devices
CSS beginning to show its age, not meeting modern needs
XBL
SVG
XForms
Declarative programming
The XML Binding Language
Scalable Vector Graphics
Already present on many phones, and in several browsers
Despite its name, a declarative constraint-based processing engine.
It uses an MVC model
The controls are abstract, intent-based, which then can bind to actual controls
This means that XForms is accessible out of the box
It also means that it is very platform-independent, and there are good examples of this in practice.
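As a minimal sketch of what an abstract, intent-based control looks like (the data and values here are made up, and the xf prefix is assumed to be bound to the XForms namespace), the markup only says "choose one of these"; the platform decides whether that becomes radio buttons, a drop-down menu, or a spoken prompt:
<xf:select1 ref="size">
   <xf:label>Size</xf:label>
   <xf:item><xf:label>Small</xf:label><xf:value>small</xf:value></xf:item>
   <xf:item><xf:label>Medium</xf:label><xf:value>medium</xf:value></xf:item>
   <xf:item><xf:label>Large</xf:label><xf:value>large</xf:value></xf:item>
</xf:select1>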
According to the DoD, 90% of the cost of software is debugging.
According to Fred Brooks, in his classic book The Mythical Man Month, the number of bugs grows faster than linearly with code size: roughly as L^1.5 for a program of length L.
In other words, a program that is 10 times longer is 32 times harder to write.
Or put another way: a program that is 10 times smaller needs only 3% of the effort.
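The arithmetic: 10^1.5 = 10 × √10 ≈ 32, and conversely (1/10)^1.5 ≈ 0.03, which is about 3% of the effort.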
The problem is, no one writes applications except programmers.
Interesting exception: spreadsheets
Mostly because they use a declarative programming model.
The nice part about declarative programming is that the computer takes care of all the boring fiddly detail.
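A small sketch of the idea in XForms (hypothetical data; the xf prefix is again assumed to be bound to the XForms namespace): you state the relationship once, and the system keeps it up to date, just like a spreadsheet cell:
<xf:model>
   <xf:instance>
      <data xmlns="">
         <price>10.00</price>
         <quantity>3</quantity>
         <total/>
      </data>
   </xf:instance>
   <xf:bind nodeset="total" calculate="../price * ../quantity"/>
</xf:model>
Whenever price or quantity changes, total is recalculated automatically; there is no event-handling code to write.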
Some of the most interesting work in this area
is being done by xport.net with their Sidewinder rich web browser.
What they have done is combined XHTML, XForms, SVG and XBL. The SVG is essentially a stylesheet for XHTML+XForms content, being applied using XBL. For instance:
The code says:
<xf:output value="..." appearance="fp:analogue-clock" class="clock"/>
The output is then something like 11:30:00, and the SVG turns this into an analogue clock (the XBL keys off the 'appearance' attribute).
Although the example shown above is not quite complete, it does more than Google Maps does, and yet it is only 25 Kbytes of code (instead of the 200+ Kbytes of JavaScript).
Remember, empirically, a program that is an order of magnitude smaller needs only 3% of the effort to build.
A certain company makes BIG machines (ones you can walk into): the user interface is very demanding, and needed 5 years and 30 people
With XForms this became 1 year and 10 people
The advantages of this approach are:
In other words: everything you need for the web!