The best symbiosis of man and computer is where a program learns from humans but notices things they would not
Global consciousness. We’ve heard that before. In the 1960s we were all going to be mystically connected; later the idea returned as a super-intelligent machine – Terminator’s Skynet – inimical to humanity. And yet, what if the reality is more mundane?
Computer scientist Danny Hillis once remarked, “Global consciousness is that thing responsible for deciding that pots containing decaffeinated coffee should be orange.” And of course, the mechanism by which the Sanka brand colour became a near-universal symbol for decaffeinated coffee in the US is exactly the same one by which hundreds of millions of people have a shared knowledge of Lady Gaga, Newton, Einstein and Darwin, and, for that matter, of many things both true and untrue.
What is different today, though, is the speed with which knowledge propagates. News, entertainment and opinions spread through social networks, websites and search engines in a process increasingly close to real-time. What rises to the top is decided not by media executives but by viral momentum.
One might say that this is the same underlying mechanism of human knowledge capture and retransmission that has always driven the advance of civilisation. But just as the spread of literacy and the printed book led us into the modern era, the even greater capability for knowledge transmission and recall using electronic networks is propelling us towards a very different future.
The web is a perfect example of what the engineer Vannevar Bush foresaw in his 1945 article “As We May Think” in The Atlantic: the augmentation of human intellect by machines. He described a future in which the human ability to follow an associative knowledge trail would be enabled by a device he called “the memex”. This would improve on human memory in the precision of its recall. Google is today’s ultimate memex.
The web also demonstrates what JCR Licklider, another early computing visionary, called “man-computer symbiosis”. Humans create the documents that make up the web and provide the associative links between them. Search engines follow our breadcrumb trails, evaluate the strongest paths, and lead others to what has been found. When the algorithms for finding the “right” documents improve, we all get smarter; when spammers and other bad actors lead the algorithms astray, we all get dumber.
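The idea that a search engine can “evaluate the strongest paths” through our links is made concrete by link-analysis algorithms in the family of Google’s original PageRank. The toy sketch below (the four-page link graph is hypothetical, and this is an illustration of the general technique, not Google’s actual implementation) shows the core intuition: a page becomes important when important pages link to it.

```python
# A toy sketch of link analysis in the spirit of PageRank.
# Each page repeatedly shares its importance with the pages it links to;
# the "damping" factor models a reader who sometimes jumps to a random page.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal importance
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split importance among outlinks
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# Page "c" is linked to by three of the four pages, so it ranks highest;
# page "d", which nothing links to, ranks lowest.
```

Nothing in this sketch knows what the pages say; the ranking emerges entirely from the human act of linking, which is the symbiosis the paragraph above describes.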
Man-computer symbiosis isn’t just about knowledge retrieval; it’s also about knowledge creation. Our computers have no intelligence without us, but they accelerate our collective intelligence at unprecedented speed.
When the web goes mobile, even more interesting things start to happen. A human with a smartphone can, in effect, see around corners and through time. What’s more, our phones are eyes and ears for what is starting to look increasingly like a global brain. Photos are automatically uploaded to vast cloud databases, each one tagged with its location and the time it was taken. Applications like Shazam can listen to a song and tell you who is singing it. Even the ambient sound of a room can be used to pinpoint your location.
To understand where the combination of mobile sensors, cloud databases and computer algorithms augmented by human action is leading us, consider the self-driving car. Stanley, a driverless vehicle, won the US Darpa (Defense Advanced Research Projects Agency) grand challenge in 2005 by navigating a 132-mile desert course in a little under seven hours. Last year, Google demonstrated an autonomous vehicle that has driven over 100,000 miles in ordinary traffic. The difference: Stanley used traditional artificial intelligence algorithms and techniques; the Google autonomous vehicle is augmented with the memory of millions of road miles logged by the human drivers who built the Google Street View database. Those cars recorded countless details – the location of stop signs, obstacles, even the road surface.
This is man-computer symbiosis at its best: the computer program learns from the activity of its human teachers, while its sensors notice and remember things the humans themselves would not. This is the future: massive amounts of data created by people, stored in cloud applications whose smart algorithms extract meaning from it, with the results fed back to those people on mobile devices – and, gradually, applications that emulate what they have learned from the feedback loops between those people and their devices.
In the best case, we see a creative symbiosis of man and machine. However, it’s easy to get the balance wrong: we have only to look at the financial market excesses of the past decade to see the danger of algorithms gone wild in the hands of rogue companies and individuals seeking only their own advantage.
The global brain is still in its infancy. We can raise it to help us make a better world, or we can raise it to be selfish, unjust and short-term in its outlook.
Author and designer Edwin Schlossberg once said, “The skill of writing is to create a context in which other people can think.” This is a good time to think hard about the future. It’s increasingly in the hands of computers that magnify the effectiveness – and the choices – of those who use them; the great challenge of the 21st century will be to teach them the difference between right and wrong.
Tim O’Reilly is the founder and CEO of O’Reilly Media