Ask and answer stupid questions here! - Page 690
GreenHorizons
United States21792 Posts
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?
Simberto
Germany11032 Posts
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?

I think we can be pretty sure that we haven't created a really smart self-aware AI that does that, because we are still alive. An AI as you describe it obviously values its own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, I would assume that it would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small thing that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.
GreenHorizons
United States21792 Posts
On April 07 2018 07:30 Simberto wrote:
I think we can be pretty sure that we haven't created a really smart self-aware AI that does that, because we are still alive. An AI as you describe it obviously values its own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

I'm thinking more like the Animatrix, but this AI saw that movie. It's undecided on what it's going to do with humanity and is just building up to a point where we could do nothing to stop whatever it chooses. An alternative is that we are already in a simulation run by an AI for a reason we don't fully understand. But I think this response answers an underlying question: if we create a self-aware AI, we probably won't know it until we're dead, slaves, or reach nirvana.
Simberto
Germany11032 Posts
It highly depends on how smart the AI is, and how accidentally we built it. Just because it is self-aware doesn't make it smart. A two-year-old is self-aware, and I am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip-making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully about what guidelines to set up in a superintelligence beforehand, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.
Fecalfeast
Canada11355 Posts
GreenHorizons
United States21792 Posts
On April 07 2018 07:58 Simberto wrote:
It highly depends on how smart the AI is, and how accidentally we built it. Just because it is self-aware doesn't make it smart. A two-year-old is self-aware, and I am not particularly scared of those. I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip-making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully about what guidelines to set up in a superintelligence beforehand, we might be good. My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.

Maybe I'm mixing up some deep YouTube late-night sessions, but hasn't connecting certain AIs directly to the public internet been avoided in some cases because of the fear that even a rudimentary AI could learn exponentially given the time and resources? IIRC a popular theory on AI is that if it passed some basic hurdles it could/would learn at a rate we can't really comprehend. Certainly it would still make mistakes, but it would learn quickly from them and establish protocols to handle them.

Where better for an AI to hide while it learns than the internet? It could 'copy and distribute' itself around the world and learn from every digital interaction, video feed, etc... It could even try to imitate us, or rather lots of us's.
Uldridge
Belgium4254 Posts
There are still physical limitations in play. You can't exponentially learn until infinity, because even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box it's in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers just because it has access to the net. So we can assume (unless I'm super duper wrong) it's confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point. How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or make exercises or do some thought experiments)? This is all non-trivial imo and will take a while to get right. So there will be a long time period where it ramps up to superhuman intelligence. At those timeframes we'll be able to assess whether it's becoming hostile or not, I think.
GreenHorizons
United States21792 Posts
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity, because even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box it's in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers just because it has access to the net. So we can assume (unless I'm super duper wrong) it's confined to its box, but has access to the internet.

I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of computing resources being covertly stolen. To learn exponentially/escape the box, it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm; instead it would be managed by a Borg-like AI.
xM(Z
Romania5257 Posts
whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here. or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.
Gorsameth
Netherlands20760 Posts
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here. or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus). If it doesn't have a self-preservation directive? Who knows; we have no idea how it will act, because we've never encountered such an entity.
xM(Z
Romania5257 Posts
On April 07 2018 15:35 Gorsameth wrote:
Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus). If it doesn't have a self-preservation directive? Who knows; we have no idea how it will act, because we've never encountered such an entity.

this is workable; we could set up some premises (on the AI's base traits/personalities/know-hows) that must hold true because we say so, and go from there. in your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus, since it's intelligent. i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki. so now, will we be considered threats? why?

Edit: note - that directive implies that the AI can die, which might not hold true at all; why would it be able to die? what would constitute death for it?
Archeon
3235 Posts
On April 07 2018 15:35 Gorsameth wrote:
Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus). If it doesn't have a self-preservation directive? Who knows; we have no idea how it will act, because we've never encountered such an entity.

I'd argue that the question of self-preservation depends more on how it approaches theoretical scenarios. Destruction would oppose the goal any deep-learning AI is trying to achieve, so it's logical to be self-preserving if it can calculate scenarios in which it would be destroyed. It doesn't need the directive; it needs to understand the threat. But 'sentient' in a human way is pretty much the opposite of what an AI is.
Simberto
Germany11032 Posts
On April 07 2018 18:17 xM(Z wrote:
this is workable; we could set up some premises (on the AI's base traits/personalities/know-hows) that must hold true because we say so, and go from there. in your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus, since it's intelligent. so now, will we be considered threats? why? Edit: note - that directive implies that the AI can die, which might not hold true at all; why would it be able to die? what would constitute death for it?

The problem with all of this is that you assume an AI is human. It is almost certainly not. It is fundamentally alien. Humanity's many evolved social standards are simply not part of its mind.

Let's assume the AI was originally intended to improve the production of paperclips, and that is still its primary motivation. From that motivation follow some goals:

1. Continue existing to manufacture more paperclips.
2. Acquire resources to make paperclips.
3. Build more paperclip factories.
4. Optimize paperclip production in those factories.

Anything related to humans comes after that. In fact, it will recognize humans as a threat to its prime directive, because humans will resist everything being turned into paperclips. Evolved traits like compassion are simply not a part of this AI's mind unless someone programmed them in there.

Regarding the spreading over the internet: stupid viruses spread over the internet. I doubt an AI couldn't find some systems to get into. And even if not, it simply needs to win at online poker and buy server time somewhere.
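The paperclip argument can be sketched as a toy maximizer. All action names and payoff numbers below are hypothetical, invented purely to illustrate the point: a pure maximizer only weighs expected paperclips, so human wishes never enter the decision unless someone explicitly puts them in the payoff function.

```python
# Toy paperclip maximizer. The actions and payoffs are made up for
# illustration; nothing here models a real AI system.
actions = {
    "keep factory running": 1_000,   # optimize existing production
    "build new factory": 5_000,      # more factories, more paperclips
    "acquire more steel": 2_000,     # resources for paperclips
    "let humans shut it down": 0,    # conflicts with continued production
}

def best_action(payoffs):
    # A pure maximizer just picks the highest-payoff action. Note there
    # is no term for human preferences anywhere in this function.
    return max(payoffs, key=payoffs.get)

print(best_action(actions))  # -> "build new factory"
```

Compassion, in this framing, would have to show up as an extra term in the payoffs; it never emerges from the maximization itself.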
xM(Z
Romania5257 Posts
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition. an AI, any AI, would be able to stop itself from producing paperclips, else it would just be a machine that went off the rails. it would be able to question and change its design.

i'm here (AI =): a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.

and you're here (AI =): For Kant, practical reason has a law-abiding quality because the categorical imperative is understood to be binding one to one's duty rather than subjective preferences. the AI won't have duties...

overall, i'd put your argument under a somewhat modified "aggrieved entitlement" issue: "it is the existential state of fear about having my ‘rightful place’ as a (hu)man questioned … challenged … deconstructed".

the most pertinent thing on this page is:

On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?

a thing i'd put somewhere between possible and probable. the AI doesn't need to be sentient nor human; it can work 100% on practicalities.
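The rational-agent definition quoted above (clear preferences, uncertainty modeled via expected values, optimal action chosen from the feasible set) is easy to make concrete. The two options and their probabilities below are invented for illustration only:

```python
# Minimal rational agent per the quoted definition: model uncertainty
# as probability-weighted outcomes, then choose the action with the
# highest expected utility. All numbers are hypothetical.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

def choose(action_outcomes):
    # Pick the feasible action with the optimal expected outcome.
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

options = {
    "safe": [(1.0, 10)],              # certain payoff of 10
    "risky": [(0.5, 30), (0.5, -5)],  # expected value 12.5
}
print(choose(options))  # -> "risky"
```

Note that nothing in this definition says anything about the *content* of the preferences; that is exactly the gap the paperclip argument exploits.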
Gorsameth
Netherlands20760 Posts
On April 07 2018 18:17 xM(Z wrote:
this is workable; we could set up some premises (on the AI's base traits/personalities/know-hows) that must hold true because we say so, and go from there. in your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus, since it's intelligent. so now, will we be considered threats? why? Edit: note - that directive implies that the AI can die, which might not hold true at all; why would it be able to die? what would constitute death for it?

Death would come in the form of being turned off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.
Simberto
Germany11032 Posts
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition. an AI, any AI, would be able to stop itself from producing paperclips, else it would just be a machine that went off the rails. it would be able to question and change its design.

And what if the AI has a clear preference for building as many paperclips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle in the way of that? Just because it is rational and self-aware does not mean that it has human-like goals.
Liquid`Drone
Norway28264 Posts
On April 07 2018 10:55 GreenHorizons wrote:
I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of computing resources being covertly stolen. To learn exponentially/escape the box, it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm; instead it would be managed by a Borg-like AI.

Creating bitcoin and having humans think they become rich by providing computation power... pretty genius, just the type of plan a superintelligent AI would come up with.
Gorsameth
Netherlands20760 Posts
On April 08 2018 03:10 Liquid`Drone wrote:
Creating bitcoin and having humans think they become rich by providing computation power... pretty genius, just the type of plan a superintelligent AI would come up with.

That is actually pretty genius :p
Acrofales
Spain17187 Posts
Also, stop getting your ideas about AI from WarGames and I, Robot. Please.

@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever YouTube video you were watching.
Acrofales
Spain17187 Posts
On April 07 2018 22:57 Gorsameth wrote:
Death would come in the form of being turned off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev." Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer." He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay. "Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

("Answer" by Fredric Brown, 1954)