Tutee of the man who built the first transistor computer.
Now a researcher at CWI, Amsterdam
Co-designed the programming language that Python is based on.
1st user of European Open Internet (1988)
Built a 'browser' in the late 80's
Organised workshops at first web conference in 1994
Co-designed CSS, HTML, ODF and more
"Copied by E. Grosser, Esqr, from an Ancient Drawing said to have been made by LIVENS, a Disciple of Rembrant. London Pub May 1790, by E. Harding, No 132 Fleet Street".
Might this be true? Lievens was in England in the 1630's.
Thanks to the Semantic Web and europeana.eu I could answer that question:
The third printing press in England was set up here in 1479
Printed on that press. Note how it imitates a manuscript.
Until the introduction of printing, books were rare, and very, very expensive, maybe something like the same price as a small farm.
Only very rich people, and rich institutions, owned books.
The first Universities were set up before printing; to borrow a book, a student had to copy it as payment. Usually book lenders only lent you part of the book at a time, to speed up the copying.
In 1424 The University of Cambridge had one of the largest libraries in Europe: 122 books.
The other producers of books were the monasteries.
"When the Anglo-Saxon Monkwearmouth-Jarrow Abbey planned to create three copies of the bible in 692—of which one survives—the first step necessary was to plan to breed the cattle to supply the 1,600 calves to give the skin for the vellum required." http://en.wikipedia.org/wiki/Medieval_art
Gutenberg combined known technologies: ink, paper, wine presses, movable type.
By 1500, some 1000 printing shops had produced 35,000 titles in 20 million copies.
The price of books tumbled (the first printed Bible cost 300 florins, about 3 years' wages for a clerk).
Books became a new means of distribution of information.
It was a paradigm shift - new industries, paper production, binders, publishers, bookshops, newspapers.
People had a reason to learn to read.
Printing enabled the rise of Protestantism, and the Enlightenment is ascribed to the availability of books.
Up until then, all information had been in the hands of the church (even universities were primarily religious institutions run by the church).
The church and state instituted censorship to try to control information. Writers were killed or imprisoned for saying things that the church didn't like (such as Galileo, for saying that the Earth moved).
Consequently many thinkers relocated to get out of the reach of the church.
"The twin occurrences -- that the city became a hub for scientists, and that it became the centre of publishing -- fed one another, resulting in the astounding fact that, over the course of the 17th century, approximately one-third of all books published in the entire world were produced in Amsterdam" - Russell Shorto
1665: first scientific journals French Journal des Sçavans and the British Philosophical Transactions
From then on the number of scientific journals doubled every 15 years, right into the 20th century.
Even as late as the 1970's, if you had said "there has to come a new way of distributing information to support this growth", people would have thought you crazy, more likely expecting the growth to end.
(Source: Little Science, Big Science, Derek J. De Solla Price)
We knew it was planned
17 November 1988
64kb/s
A year later: 128kb/s.
And yet since then speeds have effectively doubled per year.
By 1997, when AMSIX was created, it was running at 2Mb/s.
(This is of course less than you have at home now.)
These are the values I could track down (usage, not capacity, by the way; and only the measured usage at that).
Currently peaking at 6Tb/s (Tera is a million million)
My home bandwidth has similarly been doubling over the years.
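As a rough sanity check, the "doubling per year" claim can be tested against the approximate figures above (taking kb/Mb/Tb as decimal bits per second; the exact dates and speeds are the talk's own rounded numbers):

```python
from math import log2

def doublings(start, end):
    """How many doublings are needed to grow from start to end."""
    return log2(end / start)

kb, Mb, Tb = 1e3, 1e6, 1e12

# 1988 -> 1997: 64 kb/s to 2 Mb/s is about 5 doublings in 9 years.
print(round(doublings(64 * kb, 2 * Mb)))   # 5

# 1997 -> roughly 22 years later: 2 Mb/s to 6 Tb/s is about
# 22 doublings, i.e. close to one doubling per year.
print(round(doublings(2 * Mb, 6 * Tb)))    # 22
```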
The first thing I did on that first day of internet was log in to a computer in New York. It cost no more than logging in to a computer in Amsterdam.
Before 1988, phoning long-distance was expensive.
The further you phoned, the more expensive it was. This matched people's expectations, but didn't match reality.
In fact, the expensive part is the local loop: only one person (you) is using that. The long-distance part is amortised over 1000's of calls.
The internet made this all too clear.
Once the internet got fast enough (doubling per year), it started being possible to do voice calls over the internet.
There are now dozens of programs where you can voice call, and video call.
Essentially, phone calls have become better and free with the right infrastructure.
Luckily, the internet isn't owned by anyone, so the telephone companies have had to sit by and watch their golden goose be slaughtered, and have been unable to do anything about it... There's no one to sue!
(Mobile providers are still overcharging us for international calls though)
The coming of the internet enabled the Web.
Tim Berners-Lee (and Robert Cailliau) created the Web at CERN. First server: 1991.
They brought together existing technologies (Hypertext, the internet, MIME types) and created a cohesive whole.
The Web is now replacing books and many other things.
Telephone directories, encyclopaedias, train timetables, other reference works are already gone. Others will follow.
Books (as an artefact) will become a niche market. All information will be internet-based.
Just as the internet showed us the true cost of communication, the web will show us the true cost of content.
To publish information in a book you need an expensive infrastructure: paper manufacture, printing presses, distribution channels, advertising, bookshops.
When you buy a book, the infrastructure consumes most of the price: typically the producer of the information (the author) gets 10%.
But with the internet, you no longer need that infrastructure; anyone can publish, even from home.
The book made everyone a reader.
The internet can make everyone a publisher.
Unfortunately, the first successful web browser, Mosaic, left out the publishing part, and only allowed you to read web pages.
And so we got "Web 2.0".
The term Web 2.0 was invented by a book publisher (O'Reilly) as a term to build a series of conferences around.
It conceptualises the idea of Web sites that gain value by their users adding data to them, such as Wikipedia, Facebook, Flickr, ...
By putting a lot of work into a website, you commit yourself to it, and lock yourself in to their data formats too.
This is similar to data lock-in with software: when you use a proprietary program you commit yourself and lock yourself in. Moving comes at great cost.
Similarly, there is no standard way of getting your data out of one Web 2.0 site to get it into another.
As an example: you commit to a particular photo-sharing website, upload thousands of photos, tag them extensively, and then a better site comes along. What do you do?
How do you decide which social networking site to join? Do you join several and repeat the work?
How about genealogy sites? You choose one and spend months creating your family tree. The site then spots people from your tree in other members' trees, and suggests you get together. But suppose a really important tree is on another site?
Luckily email got standardised before companies could get their hands on it.
How about if your chosen site closes down? All your work is lost.
This happened with MP3.com for instance. And Stage6. And Pownce. And Ficlets. And Jaiku. And Orkut. And Google Video. And Google+.
How about if your account gets closed down?
There was someone whose Google account got hacked, and so the account got closed down. Four years of email lost, no calendar, no social network.
There was someone whose Facebook account got closed because he was trying to download all the email addresses of his friends into Outlook.
Or the woman whose account was closed for the crime of posting a photo of herself breastfeeding.
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
What should really happen is that you have a personal Website, with your photos, your family tree, your business details, and aggregators then turn this into added value by finding the links across the whole web.
All that's needed is a way of identifying who is trying to look at your content, and we can replicate Facebook without anyone owning your data.
All those passwords!
Your computer knows it is you (you've used a password or whatever to get in).
Solution: Use public key cryptography at a low level to log you in.
Everyone has two matched keys: a private key that only you hold, and a public key that anyone may have.
I lock a message with my private key (so it can only be opened with my public key).
I send the locked message to you.
You get a copy of my public key: if it opens the message, you know it was really from me.
No more spam!
You lock a message with my public key (so it can only be opened with my private key).
You send me the locked message: you know that only I can open it to read it.
Combine those two things:
I lock a message, first with my private key, then with your public key.
I send it to you: only you can open it, and you know it really came from me.
We can use the same process instead of passwords when logging in to websites.
When you register with a site, your browser gives them your public key, and takes a copy of their public key.
When you log in, the site sends a locked secure message (as above) to your browser, and asks your browser to tell it what it says.
Your browser knows it is really from the site (and not someone pretending); the site knows only you can read it.
If the browser sends back the correct message, the site lets you in; without typing in a password!
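The scheme above can be sketched with a toy RSA implementation. This is a sketch only: the tiny primes, the fixed exponent 17, and the sign-then-seal ordering are illustrative assumptions; real systems use large keys, padding, and standardised protocols.

```python
# Toy RSA: "locking" is modular exponentiation with either key.
def make_keypair(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)
    return (e, n), (d, n)                # (public key, private key)

def lock(message, key):
    exp, n = key
    return pow(message, exp, n)

# The browser and the site each have a matched key pair. The browser's
# modulus is the larger, so the site's signed challenge fits inside it.
site_pub, site_priv = make_keypair(61, 53)        # n = 3233
browser_pub, browser_priv = make_keypair(89, 97)  # n = 8633

# The site picks a secret challenge, signs it with its private key
# (proving it really is the site), then seals it with the browser's
# public key (so only this browser can open it).
challenge = 42
sealed = lock(lock(challenge, site_priv), browser_pub)

# The browser opens it with its private key, then unlocks the
# signature with the site's public key to recover the challenge.
answer = lock(lock(sealed, browser_priv), site_pub)

# Sending the recovered challenge back proves who we are: no password.
assert answer == challenge
```

Logging in is then just this exchange: the site sends `sealed`, and the browser replies with `answer`.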
Something like this was in the original design of the internet, but unfortunately it was too expensive to implement at the time.
Even more unfortunately, they didn't leave a place in the protocol for it to be added later.
Surely.
It is trivially easy to have a webserver at home: the connection speeds are now fast enough. Many home modems already support it.
There are already some examples, for instance Tim Berners-Lee's Solid.
The Internet of Things: really, things on the internet.
Not all IoT artefacts are stupid. Some, for instance, let you control the temperature of each room individually, give each room its own schedule, and ask to turn the heating off when you leave.
One of the biggest dangers of IoT is exactly the same as with Web 2.0: they use their own servers.
If the company dies, your devices die with them. Example: the Pebble watch.
Or Lowe's, which shut down its whole home-IoT platform.
The current internet is still very immature.
When books were first introduced by Gutenberg, it took 50 years before the idea caught hold that books didn't need to imitate manuscripts.
We are still feeling our way in society in how to use and deal with the internet.
The first books looked like manuscripts.
The first cars looked like carriages.
Early radio was like the theatre: actors still had to dress up.
And the Web is (still) imitating old media.
Interlinking of services.
All information freely available.
Internet everywhere: lights, oven, your alarm clock, everything connected.
All communication via internet.
Everyone a publisher.
Nothing unavailable, nothing ever going 'out of print'.
A lot of existing information is distributed by intermediaries who exist only because they have concentrated the means of distribution.
The music industry is healthy; the record industry is not.
Old media is struggling to retain ownership.
We are at a turning point in history.
The internet is going to have as great an effect on society as the book did, only much quicker.
We are in a turbulent period now because we are, historically seen, in the midst of a paradigm shift.
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." Roy Amara, The Institute for the Future