Tuesday, 17 March 2015

When will Singularity happen – and will it turn Earth into heaven or hell?

Introduction and AI considerations


Defined as the point where computers become more intelligent than humans and where human intelligence can be digitally stored, the Singularity hasn't happened yet. The idea was first theorised by the mathematician John von Neumann in the 1950s. Ray Kurzweil – self-described 'Singularitarian Immortalist' and a Director of Engineering at Google – thinks that by 2045, machine intelligence will be vastly more powerful than all human intelligence combined, and that technological development will be taken over by the machines.


"There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality," he writes in his book 'The Singularity Is Near'. But – 2045? Are we really that close?


Hardware innovation


Moore's Law, as popularly stated, has computer processing power doubling every 18 months – roughly a hundred-fold increase every decade. Is the Singularity really so unbelievable? What started early in the 20th century with the Monroe mechanical calculator has travelled through innovations like massive parallelism (the use of multiple processors or computers to perform a computation), supercomputer clusters, cloud computing, personal assistants like Siri and artificial intelligence like Watson and Deep Blue. The law of accelerating returns is in full swing.
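The compounding is easy to check for yourself – a fixed doubling period implies a definite multiple per decade, as this small illustrative calculation shows:

```python
# Multiple gained over a decade (120 months) for a given doubling period.
def decade_factor(doubling_months: float) -> float:
    return 2 ** (120 / doubling_months)

# An 18-month doubling compounds to roughly a hundred-fold per decade;
# a thousand-fold decade would need doubling every 12 months (2^10 = 1024).
print(round(decade_factor(18)))   # ~102
print(round(decade_factor(12)))   # 1024
```

Whether the doubling period is 12, 18 or 24 months matters enormously once you project decades ahead, which is one reason Singularity timelines vary so widely.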


Monroe's portable High Speed Adding Calculator


What is cognitive computing?


We're already in the era of cognitive hardware and brain-inspired architecture. IBM's latest cognitive chip, the postage stamp-sized SyNAPSE, is a new kind of computer that eschews maths and logic for more humanlike skills such as recognising images and patterns, the latter crucial for understanding human conversations.


It's powered by one million neurons, 256 million synapses and 5.4 billion transistors, and has an on-chip network of 4,096 neurosynaptic cores. It's a low-power supercomputer that only operates when it needs to and, crucially, has sensory capabilities – it's aware of its surroundings. This digital 'brain' is the latest step in artificial intelligence that could be used in robots, futuristic driverless cars, drones, digital doctors and all kinds of responsive infrastructure.


But aren't there innately human skills, such as being able to tell when someone is lying? A study published in March 2014 by the University of California San Diego and the University of Toronto found that a computer can spot faked facial expressions better than people can.


"The computer system managed to detect distinctive dynamic features of facial expressions that people missed," says Marian Bartlett, research professor at UC San Diego's Institute for Neural Computation and lead author of the study. "Human observers just aren't very good at telling real from faked expressions of pain."


What does Singularity have to do with artificial intelligence?


"Artificial intelligence only gets better, it never gets worse," says Dr Kevin Curran, IEEE Technical Expert and group leader for the Ambient Intelligence Research Group at University of Ulster. "Computational Intelligence techniques simply keep on becoming more accurate and faster due to giant leaps in processor speeds."


However, AI is only one piece of the jigsaw. "Artificial intelligence refers more narrowly to a branch of computer science that had its heyday in the 90s," says Sean Owen, Director of Data Science at Cloudera, who distinguishes the classic AI topics – game-playing, expert systems, robotics and computer vision – from machine learning, where most of the focus lies today. "I do think the classic topics of AI are making a comeback, especially robotics."


Watson


"AI technologies like Siri, Watson, and Deep Blue are a step towards Singularity," says Owen. "If Singularity is about technology making qualitative change in how we live, there is evidence that some basic sci-fi has already landed – for example you can talk to your phone or car."


The Turing Test


Although it's important not to confuse AI with the Singularity, AI will play a decisive role in machines learning new tasks and becoming more humanlike. Cue the famous Turing Test, as set out by Alan Turing, the pioneering British computer scientist and code-breaker.


"The Turing Test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of an actual human," says Curran. "In the original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test."
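The protocol Curran describes can be sketched in a few lines. This is a toy illustration, not a real evaluation – the participants and the judge here are hypothetical stand-ins – but it makes the pass criterion concrete: if the judge can only guess, accuracy settles near 50 per cent.

```python
import random

def turing_test(judge, machine, human, questions, trials=1000):
    """Sketch of Turing's imitation game. The judge sees only the two
    text transcripts and must say which participant is the machine.
    The machine 'passes' if the judge does no better than chance."""
    correct = 0
    for _ in range(trials):
        machine_is_a = random.random() < 0.5   # randomise seating
        a, b = (machine, human) if machine_is_a else (human, machine)
        transcript_a = [a(q) for q in questions]
        transcript_b = [b(q) for q in questions]
        # judge returns True if it believes participant A is the machine
        correct += judge(transcript_a, transcript_b) == machine_is_a
    return correct / trials

# If the machine's answers are indistinguishable from the human's, any
# judge is reduced to guessing, so accuracy hovers around 0.5.
echo = lambda q: "I'd rather talk about the weather."
blind_judge = lambda a, b: random.choice([True, False])
print(turing_test(blind_judge, echo, echo, ["Are you human?"]))
```

The interesting work, of course, is entirely hidden inside the `machine` function – building one whose transcripts survive a skilled judge is the unsolved part.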


Pleiades, a distributed-memory SGI ICE X cluster


The Turing Test does not directly test whether the computer behaves intelligently, but only whether the computer behaves like a human being. "Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence," says Curran. No machine has yet convincingly passed the Turing Test.



Just a pipe dream?


Is Singularity pure sci-fi?


"It has more of a foot in the world of sci-fi than in the present reality," says Owen, who points out that we still struggle with keeping websites from crashing. "Singularity discussions typically veer towards worries about the rise of a 'Skynet' – the AI that turned robotic armies on humanity in Terminator – themes that are definitely the province of fiction."


However, despite being a far cry from rampant AIs like Skynet and HAL 9000, there's something about the nature of hardware innovation so far that suggests that we need to know about Singularity. "The point is that, if this does happen, it will happen suddenly," says Owen. "Attempts to generate predictions of a super-intelligence in this century are therefore virtually complete speculation."


Logically, an artificial brain modelled on a human brain would have the same values. "We might reasonably expect its reactions to match ours, and therefore our values and morality," says Owen, who poses a couple more scenarios that Singularity could bring – that a 'brain' might create itself, or that humans might create something that's not like a human brain.


For some, Singularity is the arrival of an AI that is equal to the human brain. "True artificial intelligence would be a recreation of the human thought process – a man-made machine with our intellectual abilities," says Curran. "This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas … we are nowhere near achieving this level."


Fujitsu's K Computer


How long will it take before we get an artificial brain equal to the human brain?


It's unclear – and, so far, computers are proving unable to get anywhere near human-scale thinking. In August 2013, neuroscientists at the Okinawa Institute of Science and Technology Graduate University (OIST) and Forschungszentrum Jülich in Germany used Japan's K Computer – then one of the world's top supercomputers – to attempt to mimic a human brain's activity.


Using the open source NEST simulation software, they simulated a network of 1.73 billion nerve cells connected by 10.4 trillion synapses – a small fraction of the neurons in a human brain. Doing so occupied 82,944 processors of the K Computer, and it took 40 minutes to simulate a single second's worth of neuronal network activity. In total, the simulation used a petabyte of memory, about the same as 250,000 PCs. So while the Singularity isn't close, a slice of the brain's network can already be simulated – albeit 2,400 times slower than real time – and if Moore's Law holds, that gap will narrow quickly.
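The scale of the shortfall is worth making explicit. The back-of-envelope below uses the figures from the K Computer run; the 86 billion neuron count for a whole human brain is an assumption drawn from common neuroscience estimates, not from the study itself.

```python
# Figures reported for the 2013 K Computer / NEST simulation.
sim_neurons = 1.73e9
wall_clock_s = 40 * 60      # 40 minutes of supercomputer time...
simulated_s = 1             # ...to model 1 second of network activity
total_memory_bytes = 1e15   # one petabyte
pcs = 250_000               # the article's point of comparison

slowdown = wall_clock_s / simulated_s
neuron_share = sim_neurons / 86e9   # assumed whole-brain neuron count
gb_per_pc = total_memory_bytes / pcs / 1e9

print(f"{slowdown:.0f}x slower than real time")      # 2400x
print(f"{neuron_share:.0%} of the brain's neurons")  # 2%
print(f"{gb_per_pc:.0f} GB of RAM per PC implied")   # 4 GB
```

In other words, simulating about two per cent of the brain's neurons at 1/2,400th of real-time speed consumed one of the fastest machines on Earth – a useful corrective to any 2045 countdown.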


The research was connected to a project aiming to understand the neural control of movement and the mechanisms of Parkinson's disease, which raises an interesting point – the creation of an artificial brain will probably arrive as a by-product of specific medical research, not as part of some ambition to bring about the end of humanity.


However, simulating a human brain and creating an artificial version are two philosophically very different things. "Achieving something like 'thinking' could be far more intractable than we suppose," says Owen. "There is an excessive fascination with making systems or algorithms that work 'like the brain' … this is neither necessary nor sufficient for an intelligence."


The Singularity – whether it comes in 2045 or much later – might sound like sci-fi, but the era of digital immortality and of humans being surpassed by machines could arrive without much warning. It's the most difficult trend to predict, but what would the post-Singularity world be like? "Heaven or hell – there's no middle ground," says Owen. "It will be completely transformational in the long term … it's either part of a transcendence or an apocalypse."


The human brain has been the planet's ultimate thinking machine for hundreds of thousands of years; does it really only have 30 years left at the top of the evolutionary tree? If and when it comes, the Singularity may have a sweetener – digital immortality for all of us.





















from Techradar - All the latest technology news http://ift.tt/18W4ekz
