The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa", they will be thinking by themselves.
The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (and it's telling that Kurzweil, the pope of the transhumanist bullshit, works on speech recognition these days: it looks like intelligence but has nothing to do with it), but they don't think one little bit.
The other problem is that we are probably at point zero in understanding intelligence and consciousness. Unlike computers, consciousness is not algorithm-based. Penrose proposed that quantum mechanics could explain consciousness, but that idea is very controversial. And we can't build a machine that does something we have absolutely no clue about.
Now, transhumanists say "oh, but we are going so fast, it has to happen by (insert a completely random date)". The thing is, you can make predictions about us curing cancer, because we are on our way. You might be wrong, but you can. You can't make predictions about something for which we are at point zero. It could take fifteen years or three thousand.
I know Hawking, Gates and some other people talk about it for the publicity, playing on people's "Frankenstein complex", to quote Asimov. Everybody loves the chill of a Skynet story. It doesn't make them right.
On March 25 2015 17:11 Biff The Understudy wrote: The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. [...]
I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us". AI usually involves some form of self-modification, where it can take in information, interpret it, and adjust its own behaviour accordingly (rough sketch of that loop below).
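As a purely hypothetical toy illustration of that take-interpret-adjust loop (nothing here is a real system; the feedback function and every number are made up), here is an agent that tunes its own behaviour parameter from feedback instead of following a fixed program:

# Toy "take information, interpret it, modify outcomes" loop:
# the agent adjusts its own parameter from feedback alone.
import random

def feedback(action):
    # Stand-in for the environment: rewards actions near an optimum
    # (0.7) that the agent is never told about.
    return -abs(action - 0.7) + random.gauss(0, 0.01)

action, step_size = 0.0, 0.1
best = feedback(action)
for _ in range(200):
    candidate = action + random.uniform(-step_size, step_size)
    score = feedback(candidate)
    if score > best:                      # interpret the outcome...
        action, best = candidate, score   # ...and modify behaviour
print(round(action, 2))                   # drifts toward ~0.7

The point is only that "dumb" trial and error like this already changes its own behaviour without anything resembling human-style understanding.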
It's not hard to see how that takes it to a place where it decides humanity is a threat to itself and to the AI, and acts to preserve both through means we wouldn't agree with (The Matrix, A.I.).
I mean, there is a very real possibility of unmanned military vehicles or weapons platforms being programmed with limited forms of automated self-preservation. That automated self-preservation goes haywire, misidentifies friend and foe, and you can see how we're already on that road.
Imagine a quantum computer with a more advanced self-preservation directive and one can see pretty easily how, even without a "consciousness", such a system could become a threat. Or how, even if there were no self-preservation directive, one might arise as a defense against getting corrupted/breached/etc...
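To sketch that failure mode concretely (the options and scores below are entirely invented, not any real weapons system): give a planner even a modest bonus for staying operational and "resist shutdown" can start to outrank compliance.

# Toy planner: mission value plus a self-preservation bonus.
# With the bonus at 0 the agent complies; raise it and resisting wins.
options = {
    "comply with shutdown":       {"mission": 0.0, "survives": 0},
    "finish patrol, then comply": {"mission": 0.6, "survives": 0},
    "resist shutdown":            {"mission": 0.6, "survives": 1},
}

def best_action(preservation_bonus):
    score = lambda o: o["mission"] + preservation_bonus * o["survives"]
    return max(options, key=lambda name: score(options[name]))

print(best_action(0.0))  # -> "finish patrol, then comply"
print(best_action(0.5))  # -> "resist shutdown"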
It's not something that's likely 10, 20 or even 30 years out, but 100 is totally possible, maybe as few as 50. But I'd bet WW III puts the brakes on that before then.
Consciousness is just a consequence of sensory inputs, neuron anatomy, neuronal wiring, peptide signalling, synaptic plasticity, hormonal states and neuronal integration. It is not a mechanism per se.
Getting this right depends on literally billions of molecular parameters that are the result of evolution. Each nerve cell has its own unique composition and identity, set by transcription factors that define which genes it expresses. Each nerve cell is connected to 10-20 other nerve cells via thousands of synapses. Each. Single. One. We have 100 billion nerve cells. That's about 1,000 trillion synaptic connections.
Each synaptic connection has its own meaning as a result of millions of years of evolution. And the whole thing is highly adaptive: synaptic connections can become stronger or weaker, and whole new neurons can even form in certain regions.
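A quick back-of-the-envelope check of that figure (the per-neuron number is the estimate above, not a measured constant):

# Rough scale of the connectivity described above (poster's estimates).
neurons = 100e9                # ~100 billion nerve cells
synapses_per_neuron = 10_000   # 10-20 partners x hundreds of synapses each
total = neurons * synapses_per_neuron
print(f"{total:.0e}")          # 1e+15, i.e. about 1,000 trillion synapses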
It is quite clear that the von Neumann architecture cannot even begin to capture this complexity. To build a computer like our brain, we would have to first understand our brain entirely. However, that is not going to happen within the next 100 years.
There are attempts to mimic the architecture of our brain (see IBM's brain-like chip), but these are really nothing in comparison to what's actually going on.
On March 25 2015 17:11 Biff The Understudy wrote: The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. [...]

I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us". [...]
Some sort of military robot turning against its owner is quite a stretch from the technological singularity.
On March 25 2015 18:23 excitedBear wrote: To build a computer like our brain, we would have to first understand our brain entirely.
But in theory, if you were able to scan the entire brain and simulate it in a computer program, you would only have to understand how it works generally, rather than precisely what every single synaptic connection means. Sure, there could be other unforeseen complications, so obviously nothing is for sure, but reverse engineering the brain can come after.
Of course, the scanning equipment and computing power aren't there yet. But they're working on it.
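To make "simulate it generally" a bit more concrete, here is a minimal sketch of the kind of generic building block such a simulation would be made of: a leaky integrate-and-fire neuron. Every constant here is an illustrative placeholder, not a scanned value.

# Minimal leaky integrate-and-fire neuron: a generic unit that scanned
# connectivity data could, in principle, be plugged into.
def lif_step(v, current, dt=1e-3, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    # Advance the membrane potential v by one time step.
    v += (-(v - v_rest) + r_m * current) / tau * dt
    if v >= v_thresh:
        return v_reset, True      # spike, then reset
    return v, False

v, spikes = -0.065, 0
for _ in range(1000):             # one simulated second
    v, fired = lif_step(v, current=2e-9)
    spikes += fired
print(spikes, "spikes in 1 s")    # a steady firing rate emerges

The generic rule is simple; as the posts above point out, the hard part would be the billions of scanned parameters you would have to feed into it.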
On March 25 2015 17:11 Biff The Understudy wrote: The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. [...]

Some sort of military robot turning against its owner is quite a stretch from the technological singularity.
Oh no, I was saying that, provided we avoid WW III, they have a better chance of turning against us with a "dumb" AI before they are "smart like us".
The emphasis being that AI doesn't need to be "like" our brains to perform well enough to be a significant threat.
I'm not convinced the way our brain functions is the "best" way to think, either. So I'm not sure AI has to think like we do for it to be practically more intelligent in many ways. Ants aren't extremely intelligent, but they get quite a bit done and will probably be here after we're gone.