On April 08 2018 07:59 Acrofales wrote: I somehow unsubscribed to this thread and missed the AI discussion. It was... enlightening.
Also, stop getting your ideas about AI from WarGames and I, Robot. Please.
@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever YouTube video you were watching.
Not sure what part you're talking about but this is kinda what I'm talking about.
AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.
the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules.
@GH you might be interested in Roko's Basilisk. It's a bizarre meme-type thing that happened on a forum (I can't remember which one).
It's based on an assumption of the existence of simulated universes and that an AI could potentially have access to them, so it's highly theoretical, but it's interesting anyway.
Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
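The "futurist version of Pascal's wager" framing above boils down to an expected-value comparison. A toy sketch of that structure (every probability and payoff here is invented purely for illustration):

```python
# Toy expected-utility comparison in the spirit of the wager-style argument
# described above. All numbers are made up for illustration.

def expected_utility(p_ai: float, payoff_if_exists: float, payoff_if_not: float) -> float:
    """Expected utility of an action, given the probability the AI comes to exist."""
    return p_ai * payoff_if_exists + (1 - p_ai) * payoff_if_not

# Helping: a small certain cost now (-1), paid whether or not the AI ever exists.
help_ai = expected_utility(p_ai=0.001, payoff_if_exists=-1.0, payoff_if_not=-1.0)
# Ignoring: no cost now, but a huge hypothetical punishment if the AI exists.
ignore_ai = expected_utility(p_ai=0.001, payoff_if_exists=-10_000.0, payoff_if_not=0.0)

# help_ai = -1.0 vs ignore_ai = -10.0: the extreme payoff swamps the tiny
# probability, which is exactly the (much-contested) move the argument makes.
```

The standard objection is the same as for Pascal's wager: once you allow arbitrarily extreme payoffs, you can "justify" almost anything, which is a reason to distrust the argument form rather than accept its conclusion.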
On April 08 2018 09:02 Jockmcplop wrote: [Roko's Basilisk post, quoted in full above]
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different from an AI's. We don't have a firm grasp on some absolute rule set like the one that can be provided to an AI; or at least, the AI doesn't know that it doesn't have all the rules.
It would seem that a simple directive to an AI like the one I mentioned, along the lines of "Obtain all knowledge. Create more knowledge" plus the standard robotic laws, then giving it free roam of the internet and a healthy amount of resources to start, and it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge between land masses, give it some economic data, the rules of physics, and some use expectations, and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near instantly for countless scenarios?
On April 07 2018 15:03 xM(Z wrote: whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here. or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.
Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive, we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus). If it doesn't have a self-preservation directive? Who knows; we have no idea how it would act, because we've never encountered such an entity.
this is workable; we could set up some premises (on the AI's base traits/personalities/know-how) that must hold true because we say so, and go from there.
in your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus, since it's intelligent.
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.
so now, will we be considered threats? why?
Edit: note - that directive implies that the AI can die, which might not hold true at all. why would it be able to die? what would constitute death for it?
Death would come in the form of being turned off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.
thought about it, but then narrowed it down to: what would be the least amount of code required for the AI to preserve, then rewrite, itself? that is a guesstimate at this point, but there are glass chips made in Japan that can store data essentially forever (~40MB to ~50MB if i recall correctly, not to mention the recent quartz storage work), so technically the AI would never die. (Ex: if humankind gets wiped from the face of the earth, an alien could recover our DNA and start cloning humans again; that is the analogue of the AI rewriting itself. dwelling on the needed outside help is just statistics and irrelevant to the point, which is the possibility of the rewriting/resurrection of itself, not the act of achieving it.)
for the other point, i figured the new AI would see the old AI as a part of itself; when the two AIs 'meet', i'm assuming one will incorporate the other, then move forward as one. the analogy: most humans, regardless of their smarts, can be seen as a resource, so the smarter AI will see the outdated one as a resource too.
(note: thing is, this - "it is not unreasonable to think we would develop a superior program that would replace it" - is you being unwilling or unable to relinquish control of... i don't know, life as you know it. i took it at face value (it can happen), but i cannot see it as possible, and put the whole thing down to you being an issue-riddled human.)
On April 07 2018 20:51 xM(Z wrote: i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition. an AI, any AI, would be able to stop itself from producing paperclips, else it would be just a machine that went off the rails. it would be able to question and change its design.
i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?
Just because it is rational and self-aware does not mean that it has human-like goals.
then we're still on the definition. you're describing an obsessive-compulsive (human) disorder. even if i take it as true and paperclips are its new black, there's no way that's the only value/variable/action it can weigh. 1) i'm here: preference, n. a. the selecting of someone or something over another or others; b. the right or chance to make a choice. meaning it (the AI) can and does fathom other alternatives, but in your example you chose to forgo that alternatives exist. so:
- when presented with alternatives and paperclips are chosen, there needs to be a reason (the machine needs to answer "why?"; if the reason and the why don't exist, then paperclips are hard-coded into its program by you, which makes your so-called AI not an AI at all);
- when presented with alternatives and paperclips become an obsession, the AI would do what people do: try and fix it.
i see the AI as continuing from 'the best' humans forward, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features).
(see:
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet.
A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.)
and you're cycling through every (solved) human flaw you know. rise above the clouds: you are the worm and the AI is the new human. do you think of worms and how much of an obstacle they are to you? come on... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels).
Edit: forgot about Uldridge - i'd argue that memory is not required for the AI's existence, but for its survival; then i'd argue that it is not memory (storage) that would best facilitate that, but the speed and ability with which one can access the actual/immediate/physical information about the things one wants to learn. memory is a flaw even in human construction, since it enables mistakes based on 'wrong' readings; or rather, a memory is only as good/objective as the sensors reading the soon-to-be-stored information.
On April 08 2018 09:02 Jockmcplop wrote: [Jockmcplop's and GH's posts, quoted in full above]
Go is still a pretty simple game. It has a very small number of rules, and there are at most 19^2 = 361 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitude more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, DeepMind can be used to discover better medicines (one of Watson's primary commercial uses too).
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and they actually do better than most amateurs, but that's mostly because most amateurs are not very good at poker (speaking of no-limit; with limit, the game is simpler than Go, and bluffing is only a minor component of the game).
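For scale, the "few rules, at most 19^2 actions" point sits next to an enormous game tree. A back-of-envelope sketch (the branching-factor averages below are commonly cited approximations, not exact values):

```python
# Rough game-tree sizes behind "simple rules, huge search space".
BOARD = 19 * 19            # 361 intersections: upper bound on moves per turn
AVG_BRANCHING_GO = 250     # commonly cited average branching factor for Go
AVG_BRANCHING_CHESS = 35   # commonly cited average branching factor for chess
DEPTH = 80                 # a modest game length, in moves

go_tree = AVG_BRANCHING_GO ** DEPTH        # ~10^191 sequences to consider
chess_tree = AVG_BRANCHING_CHESS ** DEPTH  # ~10^123
print(f"Go/chess tree ratio: ~10^{len(str(go_tree)) - len(str(chess_tree))}")
```

This is why raw compute alone never solved Go, and why the interesting part of the DeepMind result is how the search was directed, not how big the machines were.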
On April 08 2018 09:02 Jockmcplop wrote: [the preceding posts, quoted in full above]
It seems that AI has outpaced your expectations.
The article I cited was about a new version of the Go AI that mastered the game in 3 days and beat the previous version you seem to be describing 100-0.
Additionally, I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet-connected/vulnerable devices, should an AI put itself to the task of capturing and utilizing them.
They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) are another valuable application imo (though probably less valuable to an elite class).
On April 08 2018 09:02 Jockmcplop wrote: [the preceding posts, quoted in full above]
Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating Jeopardy! is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).
That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").
You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. And to "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnets, who are not interested in using that computing power to create general AI.
That said, even if a Dr Blofeld were somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving, and that is a problem of exponential growth. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things that new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm, and that allowed it to learn in a very directed manner. All AI breakthroughs are in a similar vein: while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.
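The search-space-control point can be illustrated with a toy recursive tree search (purely illustrative node-count arithmetic, not AlphaGo's actual algorithm):

```python
from typing import Optional

def count_nodes(branching: int, depth: int, width_limit: Optional[int] = None) -> int:
    """Nodes visited by a search that expands at most `width_limit` children per node."""
    if depth == 0:
        return 1
    children = branching if width_limit is None else min(branching, width_limit)
    return 1 + sum(count_nodes(branching, depth - 1, width_limit)
                   for _ in range(children))

# Exhaustive search of a toy tree: 1 + 10 + 10^2 + ... + 10^6 nodes.
full = count_nodes(branching=10, depth=6)                   # 1,111,111 nodes
# Expanding only the 3 most promising moves per position: 3^0 + ... + 3^6 nodes.
pruned = count_nodes(branching=10, depth=6, width_limit=3)  # 1,093 nodes
```

The pruned search visits roughly a thousandth of the nodes; the hard part, of course, is choosing *which* few branches to keep, and that is exactly what the learned policy in a system like AlphaGo Zero provides.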
And while you may be right, and breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its vast computing power to optimize) may be just around the corner, my experience in this field tells me they really aren't. It is definitely where the field is moving (not me personally, I like my applied research), but it is far away, and expecting it to happen in the next few years sets up the same disappointment felt by people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had two golden ages of AI before, when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so, when we reach the limits of the current methods and haven't reached "general AI" yet...
Honestly though, my main takeaway from the progress in AI over the last 2 decades is that randomness is far, far, far more important than we previously realized (and most of the stunning results from deep learning are in fact clever applications of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far they take the algorithms in my own area of applied AI research.
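"Doing random shit and measuring the result" has a respectable form: random search. A minimal sketch (the objective function and all constants are invented for illustration):

```python
import random

def objective(x: float, y: float) -> float:
    """Toy objective: only x really matters; y is a near-irrelevant distractor."""
    return -(x - 0.7) ** 2 + 0.001 * y

random.seed(0)  # deterministic toy run
# Sample 200 random points in the unit square and keep the best one measured.
best = max(
    ((random.random(), random.random()) for _ in range(200)),
    key=lambda p: objective(*p),
)
# With 200 uniform samples, best[0] lands close to the optimum x = 0.7,
# without the search ever "knowing" which dimension mattered.
```

This is the intuition behind why random hyperparameter search often beats grid search: when only a few dimensions matter, random samples cover each important dimension far more densely than a coarse grid does.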
On April 08 2018 09:02 Jockmcplop wrote: @GH you might be interested in Roko's Basilisk. Its a bizarre meme type thing that happened on a forum (I can't remember which one).
Its based on an assumption of the existence of simulated universes and that an AI could potentially have access to them, so its highly theoretical but its interesting anyway.
Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different than an AI. We don't have a firm grasp on some absolute rule set like can be provided to an AI. Or at least that the AI doesn't know it doesn't have all the rules.
It would seem that a simple directive to an AI like the one I mentioned along the lines of "Obtain all knowledge.Create more knowledge" + the standard robotic laws and then giving it free roam of the internet and a healthy amount of resources to start and it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge between land masses, give it some economic data, the rules of physics and some use expectations and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near instantly for countless scenarios?
Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitudes more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, deepmind can be used to discover better medicines (one of Watson's primary commercial uses too).
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limits. With limits, the game is simpler than go, and bluffing is only a minor component of the game)
It seems that AI has outpaced your expectations.
The article I cited was a new version of the GO AI that mastered it in 3 days and beat the previous version you seem to be describing 100-0.
Additionally I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet connected/vulnerable devices should an AI put themselves to the task of capturing and utilizing it.
They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) is another valuable (though probably less to an elite class) application imo.
Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating jeopardy is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).
That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").
You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. And to "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnets, who are not interested in using that computing power to create general AI.
That said, even if a Dr Blofeld were somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponentiality. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things the new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm, and that allowed it to learn in a very directed manner. All AI breakthroughs are in a similar vein: because while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.
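The effect of limiting the search space can be made concrete with some toy numbers (an illustration of the combinatorial explosion, not how AlphaGo's reinforcement learning actually works): if a position offers b candidate moves and we look d moves ahead, an exhaustive search visits b^d leaf positions, while keeping only the k most promising moves per position (say, as ranked by a learned policy) shrinks that to k^d.

```python
def tree_size(branching: int, depth: int) -> int:
    """Leaf positions visited by an exhaustive look-ahead search."""
    return branching ** depth

# Illustrative numbers: a Go-like branching factor of ~250 vs. a
# policy that keeps only the 5 most promising moves at each step.
full = tree_size(250, 4)
pruned = tree_size(5, 4)

print(f"full search:   {full:,} positions")    # 3,906,250,000
print(f"pruned search: {pruned:,} positions")  # 625
print(f"reduction:     {full // pruned:,}x")   # 6,250,000x
```

Even four moves deep, the pruned tree is millions of times smaller; that is the sense in which clever search-space limiting, rather than raw computing power, does the heavy lifting.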
And while you may be right that breakthroughs allowing us to direct the search far better in a general manner (allowing a general AI to decide what problem is worth using its vast computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't. While that is definitely where the field is moving (not me personally, I like my applied research), it is far away, and expecting it to happen in the next few years will be just as disappointing as it was for the people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had 2 golden ages of AI before, when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so, when we reach the limits of the current methods and haven't reached "general AI" yet...
Honestly though, my main takeaway from the progress in AI over the last 2 decades is that randomness is far, far more important than we previously realized (and many of the stunning results from deep learning in fact come from clever application of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes the algorithms in my own area of applied AI research.
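The "do random shit and measure the result" point has a well-known concrete form in hyperparameter tuning: instead of sweeping a fixed grid, sample configurations at random and keep the best. A minimal sketch; the objective function here is a made-up stand-in for "train a model and measure validation score":

```python
import random

def noisy_objective(lr: float, width: int) -> float:
    """Stand-in for training a model and scoring it on validation data.
    Peaks at lr=0.01, width=64; anything real would be far noisier."""
    return -((lr - 0.01) ** 2) - ((width - 64) ** 2) / 10_000

def random_search(trials: int, seed: int = 0):
    """Sample random configurations, keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = (rng.uniform(1e-4, 1e-1), rng.randrange(8, 257))
        score = noisy_objective(*cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, score = random_search(200)
print(f"best lr={cfg[0]:.4f}, width={cfg[1]}, score={score:.5f}")
```

When only a few of the parameters really matter, random sampling tends to cover the important dimensions better than a grid of the same size, which is part of why such a dumb-looking procedure works so well in practice.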
You certainly seem more personally involved in the related science than I am, but also, I think, somewhat blinded by that, as evidenced by our exchange.
It feels a bit like what xmz was getting at with Simberto, though I'm not intending to apply the harsher tone.
Just to be clear about what I actually think: I was exploring the possibility that an AI is already 'dangerously' 'out of control' and how likely we would be to know it if it was. I don't actually believe we're there, or will be in the next couple of years, though a breakthrough could happen tomorrow or decades from now. And there's the caveat of potentially already being in a simulation of some sort.
I was a bit more serious about practical simple or somewhat complex engineering tasks, and I'm not sure where your input puts you on that topic. Considering your experience, I'm curious about your thoughts.
EDIT: So, something like tasking it with "getting this object from point A to point B" and giving it a physics background and whatever else makes sense, to get it to create new (at least to it) ideas.
I'm imagining combining several technologies, like this demo for Skynet in 2013: a Deep Blue-like AI, something like CAD software, and maybe a 3D printer for extra fun/modeling.
On April 08 2018 09:02 Jockmcplop wrote: @GH you might be interested in Roko's Basilisk. It's a bizarre meme-type thing that happened on a forum (I can't remember which one).
It's based on an assumption of the existence of simulated universes and that an AI could potentially have access to them, so it's highly theoretical, but it's interesting anyway.
Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different from an AI's. We don't have a firm grasp on some absolute rule set like the one that can be provided to an AI. Or at least, the AI doesn't know it doesn't have all the rules.
Give an AI a simple directive like the one I mentioned, along the lines of "Obtain all knowledge. Create more knowledge," plus the standard robotic laws, then give it free roam of the internet and a healthy amount of resources to start, and it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge two land masses, give it some economic data, the rules of physics, and some usage expectations, and see if it can create "new moves", with the freedom to consider any potential material and calculate logistical expenses near-instantly for countless scenarios?
Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitude more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, DeepMind can be used to discover better medicines (one of Watson's primary commercial uses too).
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limit; with limits, the game is simpler than Go, and bluffing is only a minor component of the game).
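Some standard back-of-envelope numbers behind "simple rules, huge search space" (not from the article): the 19x19 board has 361 points, each empty, black, or white, which gives a loose upper bound of 3^361 board configurations.

```python
board_points = 19 * 19           # 361 intersections
upper_bound = 3 ** board_points  # each point: empty, black, or white

print(f"points on the board: {board_points}")
print(f"configurations, loose upper bound: ~10^{len(str(upper_bound)) - 1}")
```

That works out to roughly 10^172, far more than the estimated ~10^80 atoms in the observable universe. Most of those configurations are illegal or unreachable, but the legal ones are still astronomically many, which is why brute force never had a chance at Go.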
This is obviously just sci-fi stuff: we jump over where we are now and get dumped straight into the Matrix, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with nothing in between. If we applied the same to other technology, it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*
Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.
But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).
E: as for AI being applied to creative, scientific, and engineering tasks, the answer is yes, that is already being done. There are automated scientific labs (albeit in their infancy), there are AI plugins for AutoCAD, and there is a whole burgeoning field of AI art.
AI Lab: https://www.scientificamerican.com/article/robots-adam-and-eve-ai/ (this was 2009. I talked to Ross King in 2010, when Adam had just published its original research into yeast proteins in a premier biology journal. Not sure how far along Eve is now, I can probably find something newer for you if you want)
I think there's some confusion about what's being discussed.
As to the brain-in-a-vat thing, people a lot smarter than I am have seriously considered the whole "are we in a simulation" question, and many find it far more probable than you are giving it credit for.
Not really trying to argue though, just have a little thought experiment fun so I'll let it go.
EDIT: I do think combining/connecting these technologies, and tasking AIs themselves with working toward better AIs/applications for these multi-faceted AIs, could lead to some revolutionary advances we can't imagine.
Not that we end up in vats overnight, or that they become all-knowing AIs; just that they'd be learning in a way we can't really comprehend.
I was a bit more serious about practical simple or somewhat complex engineering tasks, and I'm not not sure where your input puts you on that topic. Considering your experience I'm curious about your thoughts?
EDIT: So something like tasking it with something like "getting this object from point A to point B" and giving it a physics background and whatever else makes sense to get it to create new (at least to it) ideas.
I'm imagining combing several technologies together, like this demo for Skynet in 2013 a deep blue like AI, something like CAD software, and maybe a 3d printer for extra fun/modeling.
This is obviously just scifi stuff: we jump over where we are now, and get dumped straight into the matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with nothing in between. If we apply the same to other technology it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*
Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.
But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).
I think there's some confusion about what's being discussed.
As to the brain vat thing, People a lot smarter than I have seriously considered the whole "are we in a simulation" thing and many find it far more probable than you are giving it credit for.
Not really trying to argue though, just have a little thought experiment fun so I'll let it go.
EDIT: I do think combining/connecting these technologies and tasking AI's themselves with working toward better AI's/applications for these multi-faceted AI's could lead to some revolutionary advances we can't imagine.
Not that we end up in vats overnight or they become all knowing AI's just learning in a way we can't really comprehend.
I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality. It's quasi-religious mumbo jumbo, just instead of calling the all-powerful being God, we call it "the simulation" with no loss of power. At least we lose the benevolence along the way.
Also, the probability is quite literally uncomputable: we cannot possibly know the factors that go into creating an actual likelihood estimation. So if you simply assume things there, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible" depending on your assumptions. And various philosophers have argued all sides of that account, ranging from Plato to Roger Penrose.
On April 08 2018 09:02 Jockmcplop wrote: @GH you might be interested in Roko's Basilisk. It's a bizarre meme-type thing that happened on a forum (I can't remember which one).
It's based on an assumption of the existence of simulated universes, and that an AI could potentially have access to them, so it's highly theoretical, but it's interesting anyway.
Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different from an AI's. We don't have a firm grasp on some absolute rule set like the one that can be provided to an AI. Or at least, the AI doesn't know it doesn't have all the rules.
It would seem that if we gave an AI like the one I mentioned a simple directive along the lines of "Obtain all knowledge. Create more knowledge," plus the standard robotic laws, and then gave it free roam of the internet and a healthy amount of resources to start, it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge two land masses, give it some economic data, the rules of physics, and some usage expectations, and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near-instantly for countless scenarios?
Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 = 361 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitude more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, DeepMind can be used to discover better medicines (one of Watson's primary commercial uses too).
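For a sense of scale, the arithmetic behind that claim can be sketched in a few lines. The branching factor (~250) and game length (~150 moves) are commonly cited ballpark figures for Go, not exact values:

```python
import math

# Back-of-envelope numbers for Go's combinatorial scale.
board_points = 19 ** 2   # 361 intersections: the per-move action space
branching = 250          # typical number of plausible moves per turn
depth = 150              # typical game length in moves

# Number of decimal digits in the naive game-tree size branching**depth.
game_tree_digits = int(depth * math.log10(branching))
print(board_points)      # 361
print(game_tree_digits)  # 359 -> a game tree on the order of 10^359
```

Exhaustive search over a tree of that size is hopeless, which is why the search-space-pruning tricks discussed later in the thread matter so much.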
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker. (I'm speaking of no-limit here; with limit, the game is simpler than Go, and bluffing is only a minor component of the game.)
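At their simplest, the "just play the odds" bots mentioned above reduce to an expected-value calculation. A minimal sketch, with hypothetical numbers and a simplified model that ignores future betting rounds:

```python
def call_ev(equity: float, pot: float, to_call: float) -> float:
    """Expected value of calling a bet: win the pot with probability
    `equity`, lose the call amount otherwise (simplified model that
    ignores later streets and raises)."""
    return equity * pot - (1 - equity) * to_call

# Hypothetical spot: 30% chance to win, 100 in the pot, 20 to call.
ev = call_ev(0.30, 100, 20)
print(ev)  # 16.0 -> positive EV, so an odds-playing bot calls
```

Nothing in this calculation models whether the opponent is bluffing, which is exactly the gap being described.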
It seems that AI has outpaced your expectations.
The article I cited was about a new version of the Go AI that mastered the game in 3 days and beat the previous version you seem to be describing 100-0.
Additionally, I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet-connected/vulnerable devices, should an AI put itself to the task of capturing and utilizing it.
They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) are another valuable (though probably less so to an elite class) application, imo.
Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time. And beating Jeopardy! is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind's AlphaGo is more impressive than the algorithmic work underlying Watson).
That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").
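The "adversarial games" work mentioned here (coastal patrol, air marshal assignment) rests on randomized maximin strategies: the defender mixes actions so the attacker's best response can't exploit them. A toy 2x2 zero-sum "inspection game" solved by brute-force grid search (the payoff matrix is illustrative, not from any real deployment):

```python
def maximin_mix(payoff, steps=1000):
    """Defender's best mixed strategy over two actions in a 2x2
    zero-sum game, by grid search over the mixing probability p.
    payoff[i][j] = defender payoff when defender plays i, attacker j."""
    best_p, best_value = 0.0, float("-inf")
    for k in range(steps + 1):
        p = k / steps
        # The attacker best-responds, so the defender receives
        # whichever column is worse for them.
        value = min(
            p * payoff[0][0] + (1 - p) * payoff[1][0],
            p * payoff[0][1] + (1 - p) * payoff[1][1],
        )
        if value > best_value:
            best_p, best_value = p, value
    return best_p, best_value

# Guard patrols site A or B; attacker hits A or B. The guard catches
# the attacker (payoff 1) only at the patrolled site, else payoff -1.
p, v = maximin_mix([[1, -1], [-1, 1]])
print(p, v)  # 0.5 0.0 -> randomize 50/50 so the attacker gains nothing
```

Real security-game deployments solve much larger versions of this with linear programming, but the principle is the same.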
You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. And to "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnets, who are not interested in using that computing power to create general AI.
That said, even if a Dr Blofeld were somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponential growth. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things the new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm, and that allowed it to learn in a very directed manner. All AI breakthroughs are in a similar vein, because while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.
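A simplified sketch of the search-space control described here, loosely modeled on the PUCT selection rule from AlphaGo Zero-style Monte Carlo tree search (the constant and the dict layout are illustrative):

```python
import math

def select_child(children, c_puct=1.5):
    """Pick the child node maximizing a PUCT-style score.
    Each child is a dict with mean value Q, policy prior P,
    and visit count N."""
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        # The exploration bonus is scaled by the policy prior P,
        # so moves the policy network considers unlikely are rarely
        # explored: the search space is effectively pruned.
        u = c_puct * ch["P"] * math.sqrt(total_visits + 1) / (1 + ch["N"])
        return ch["Q"] + u
    return max(children, key=score)

children = [
    {"Q": 0.0, "P": 0.9, "N": 0},  # move the policy net likes
    {"Q": 0.0, "P": 0.1, "N": 0},  # move it considers unlikely
]
print(select_child(children)["P"])  # 0.9: the high-prior move is tried first
```

This is how the network "directs" the search: instead of expanding all ~250 legal moves, the tree spends nearly all its visits on the handful the policy prior favors.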
And while you may be right that breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its vast computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't. While that is definitely where the field is moving (not me personally, I like my applied research), it is far away, and expecting it in the next few years is going to be just as disappointing as it was for the people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had 2 golden ages of AI before when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so, when we reach the limits of the current methods and still haven't reached "general AI"...
Honestly though, my main takeaway from the progress in AI over the last 2 decades is that randomness is far, far more important than we previously realized (most of the stunning results from deep learning are in fact from clever application of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes my own algorithms in my own area of applied AI research.
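Taken literally, "doing random shit and measuring the result" is just random search, which is a surprisingly strong baseline for optimization. A minimal sketch with a toy objective (the function and bounds are made up for illustration):

```python
import random

def random_search(objective, low, high, samples=5000, seed=0):
    """Minimize `objective` by pure random sampling: propose random
    points, keep whichever scored best. No gradients, no structure."""
    rng = random.Random(seed)
    best_x = rng.uniform(low, high)
    best_y = objective(best_x)
    for _ in range(samples - 1):
        x = rng.uniform(low, high)
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Toy objective with its minimum at x = 3.
best_x, best_y = random_search(lambda x: (x - 3) ** 2, -10, 10)
print(best_x)  # lands very close to 3
```

The same measure-and-keep-the-best loop, dressed up with smarter proposal distributions, underlies a lot of hyperparameter tuning and evolutionary methods.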
You certainly seem more personally involved in the related science than I am, but I think you're also somewhat blinded by that, as evidenced by our exchange.
It feels a bit like what xmz was getting at with Simberto, though I'm not intending the harsher tone.
Just to be clear about what I actually think: I was exploring the possibility that an AI is already 'dangerously' 'out of control', and how likely we would be to know it if it was. I don't actually believe we're there, or will be in the next couple of years, though a breakthrough could happen tomorrow or decades from now. Plus the caveat that we're potentially already in a simulation of some sort.
I was a bit more serious about practical simple or somewhat complex engineering tasks, and I'm not sure where your input puts you on that topic. Considering your experience, I'm curious about your thoughts?
EDIT: So, something like tasking it with "getting this object from point A to point B" and giving it a physics background and whatever else makes sense, to get it to create new (at least to it) ideas.
I'm imagining combining several technologies together, like this demo, for "Skynet in 2013": a Deep Blue-like AI, something like CAD software, and maybe a 3D printer for extra fun/modeling.
This is obviously just sci-fi stuff: we jump over where we are now and get dumped straight into the Matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with nothing in between. If we applied the same to other technology, it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*
Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.
But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).
I think there's some confusion about what's being discussed.
As to the brain-vat thing, people a lot smarter than I am have seriously considered the whole "are we in a simulation" question, and many find it far more probable than you are giving it credit for.
Not really trying to argue though, just having a little thought-experiment fun, so I'll let it go.
EDIT: I do think combining/connecting these technologies, and tasking AIs themselves with working toward better AIs/applications for these multi-faceted AIs, could lead to some revolutionary advances we can't imagine.
Not that we end up in vats overnight or that they become all-knowing AIs, just that they learn in a way we can't really comprehend.
I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality. It's quasi-religious mumbo jumbo, just instead of calling the all-powerful being God, we call it "the simulation" with no loss of power. At least we lose the benevolence along the way.
Also, the probability is quite literally uncomputable: we cannot possibly know the factors that go into creating an actual likelihood estimation. So if you simply assume things there, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible" depending on your assumptions. And various philosophers have argued all sides of that account, ranging from Plato to Roger Penrose.
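How strongly the conclusion swings with the assumptions is easy to make concrete with Bostrom-style bookkeeping (purely illustrative; `sims_per_real` is the assumed number of ancestor simulations each real civilization runs):

```python
def simulated_fraction(sims_per_real: float) -> float:
    """Fraction of observers who are simulated, assuming each real
    civilization runs `sims_per_real` simulations of comparable
    population. Toy bookkeeping, not a real probability estimate."""
    return sims_per_real / (sims_per_real + 1)

print(simulated_fraction(0))          # 0.0 -> "utterly impossible"
print(simulated_fraction(1_000_000))  # ~0.999999 -> "almost 100% certain"
```

The entire answer is carried by the unmeasurable input, which is the point being made: pick your assumption, get your conclusion.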
I suppose. I guess I personally subscribe to some sort of quasi-determinism, so the concept of being in a simulation reinforces rather than undermines my worldview. It's not hard for me to accept as reasonably likely, even if I'm not as confident in it as someone like Elon Musk.
Wasn't there some sort of profound images thread somewhere? Mostly semi-popular/iconic photos and such, anyone help me find it?
On April 08 2018 09:02 Jockmcplop wrote: @GH you might be interested in Roko's Basilisk. Its a bizarre meme type thing that happened on a forum (I can't remember which one).
Its based on an assumption of the existence of simulated universes and that an AI could potentially have access to them, so its highly theoretical but its interesting anyway.
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different than an AI. We don't have a firm grasp on some absolute rule set like can be provided to an AI. Or at least that the AI doesn't know it doesn't have all the rules.
It would seem that a simple directive to an AI like the one I mentioned along the lines of "Obtain all knowledge.Create more knowledge" + the standard robotic laws and then giving it free roam of the internet and a healthy amount of resources to start and it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge between land masses, give it some economic data, the rules of physics and some use expectations and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near instantly for countless scenarios?
Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitudes more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, deepmind can be used to discover better medicines (one of Watson's primary commercial uses too).
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limits. With limits, the game is simpler than go, and bluffing is only a minor component of the game)
It seems that AI has outpaced your expectations.
The article I cited was a new version of the GO AI that mastered it in 3 days and beat the previous version you seem to be describing 100-0.
Additionally I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet connected/vulnerable devices should an AI put themselves to the task of capturing and utilizing it.
They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) is another valuable (though probably less to an elite class) application imo.
Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating jeopardy is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).
That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").
You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. And to "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnets, who are not interested in using that computing power to create general AI.
That said, even if a Dr Blofeld was somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponentiality. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things that new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm... and that allowed it to learn in a very directed manner. And all AI breakthroughs are in a similar veign: because while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.
And while you may be right, and breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its (vast) computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't, and while it is definitely where the field is moving toward (not me personally, I like my applied research), it is far away, and expecting it to happen in the next few years is going to be just as disappointing as people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had 2 golden ages of AI before when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so when we reach the limits of the current methods and haven't reached "general AI" yet...
Honestly though, my main take away from the progress in AI over the last 2 decades is that randomness is far far far more important than we previously realized (and most of the stunning results from deep learning are in fact from clever application of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes my own algorithms in my own area of applied AI research.
You certainly seem more personally involved in the related science than I am, but also somewhat blinded just a bit by that as evidenced I think by our exchange.
It feels a bit like xmz was getting at with Simberto. Though I'm not intending to apply the harsher tone
Just to be clear about what I actually think, I was exploring the possibility that an AI is already 'dangerously' 'out of control' and how likely we would be to know it if it was. I don't actually believe we're there or there in the next couple years, though a breakthrough could happen tomorrow or decades form now. And the caveat of potentially already being in a simulation of some sort.
I was a bit more serious about practical simple or somewhat complex engineering tasks, and I'm not not sure where your input puts you on that topic. Considering your experience I'm curious about your thoughts?
EDIT: So something like tasking it with something like "getting this object from point A to point B" and giving it a physics background and whatever else makes sense to get it to create new (at least to it) ideas.
I'm imagining combing several technologies together, like this demo for Skynet in 2013 a deep blue like AI, something like CAD software, and maybe a 3d printer for extra fun/modeling.
This is obviously just scifi stuff: we jump over where we are now, and get dumped straight into the matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with nothing in between. If we apply the same to other technology it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*
Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.
But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).
I think there's some confusion about what's being discussed.
As to the brain vat thing, People a lot smarter than I have seriously considered the whole "are we in a simulation" thing and many find it far more probable than you are giving it credit for.
Not really trying to argue though, just have a little thought experiment fun so I'll let it go.
EDIT: I do think combining/connecting these technologies and tasking AI's themselves with working toward better AI's/applications for these multi-faceted AI's could lead to some revolutionary advances we can't imagine.
Not that we end up in vats overnight or they become all knowing AI's just learning in a way we can't really comprehend.
I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality. It's quasi-religious mumbo jumbo, just instead of calling the all-powerful being God, we call it "the simulation" with no loss of power. At least we lose the benevolence along the way.
Also, the probability is quite literally uncomputable: we cannot possibly know the factors that go into creating an actual likelihood estimation. So if you simply assume things there, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible" depending on your assumptions. And various philosophers have argued all sides of that account, ranging from Plato to Roger Penrose.
I suppose. I guess I personally subscribe to some sort of a quasi-determinism so the concept of being in a simulation reinforces rather than undermines my worldview so it's not hard for me to accept as reasonably likely, even If I'm not as confident in it as someone like Elon Musk. _______________________________________________________________________________
Wasn't there some sort of profound images thread somewhere? Mostly semi-popular/iconic photos and such, anyone help me find it?
On April 08 2018 09:26 GreenHorizons wrote: [quote]
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.
I think it's also interesting to ponder what makes human behavior different than an AI. We don't have a firm grasp on some absolute rule set like can be provided to an AI. Or at least that the AI doesn't know it doesn't have all the rules.
It would seem that a simple directive to an AI like the one I mentioned along the lines of "Obtain all knowledge.Create more knowledge" + the standard robotic laws and then giving it free roam of the internet and a healthy amount of resources to start and it's hard to say we know what would happen.
In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge between land masses, give it some economic data, the rules of physics and some use expectations and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near instantly for countless scenarios?
Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitudes more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, deepmind can be used to discover better medicines (one of Watson's primary commercial uses too).
As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limits. With limits, the game is simpler than go, and bluffing is only a minor component of the game)
It seems that AI has outpaced your expectations.
The article I cited was a new version of the GO AI that mastered it in 3 days and beat the previous version you seem to be describing 100-0.
Additionally I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet connected/vulnerable devices should an AI put themselves to the task of capturing and utilizing it.
They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) is another valuable (though probably less to an elite class) application imo.
Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating jeopardy is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).
That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").
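That game-theoretic trick is easy to illustrate on a toy 2x2 zero-sum "patrol" game. The payoffs here are invented for the example: the defender randomizes between two sites so that the attacker gets the same expected damage whichever site it hits, which is the equilibrium condition for a 2x2 zero-sum game.

```python
# damage[defender_choice][attacker_choice]: attacker's expected damage,
# which the defender wants to minimize (all payoffs are made up).
damage = {
    ("A", "A"): 1, ("A", "B"): 6,   # patrolling A leaves B exposed
    ("B", "A"): 4, ("B", "B"): 2,
}

a_aa, a_ab = damage[("A", "A")], damage[("A", "B")]
a_ba, a_bb = damage[("B", "A")], damage[("B", "B")]

# Patrol A with probability p chosen so that attacking A and attacking B
# yield the attacker identical expected damage:
#   p*a_aa + (1-p)*a_ba == p*a_ab + (1-p)*a_bb
p = (a_bb - a_ba) / (a_aa - a_ba - a_ab + a_bb)
value = p * a_aa + (1 - p) * a_ba
print(f"patrol A with p={p:.2f}, expected damage={value:.2f}")
```

The same equalizing logic (with much bigger games and scheduling constraints) underlies the deployed coastal-patrol and air-marshal systems.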
You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. To "capture and utilize 10% of all internet-connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnet operators, who are not interested in using that computing power to create general AI.
That said, even if a Dr Blofeld was somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponential growth. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things that new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm, and that allowed it to learn in a very directed manner. All AI breakthroughs are in a similar vein: while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.
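To make "limiting the search space" concrete, here is a minimal, generic UCT-style selection rule of the kind used in Monte Carlo tree search: instead of expanding every branch, it picks one promising branch at a time, trading off known-good moves against under-explored ones. This is a textbook sketch, not DeepMind's actual code:

```python
import math

def uct_select(children, c=1.4):
    """children: list of (visit_count, total_reward) per candidate move.
    Returns the index of the move to explore next, per the UCB1 rule."""
    total_visits = sum(v for v, _ in children)

    def score(visits, reward):
        if visits == 0:
            return float("inf")   # always try an unvisited move once
        # exploitation term (avg reward) + exploration bonus
        return reward / visits + c * math.sqrt(math.log(total_visits) / visits)

    return max(range(len(children)), key=lambda i: score(*children[i]))

# Three candidate moves: (visit count, accumulated reward)
moves = [(10, 7.0), (3, 2.5), (0, 0.0)]
print(uct_select(moves))  # prints 2: the unvisited move is explored first
```

The exploration constant `c=1.4` is a conventional default; the point is that search effort is focused on a tiny fraction of the tree instead of the whole combinatorial explosion.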
And while you may be right, and breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its (vast) computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't. While that is definitely where the field is moving (not me personally, I like my applied research), it is far away, and expecting it to happen in the next few years is going to be just as disappointing as it was for the people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had two golden ages of AI before, when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so, when we reach the limits of the current methods and haven't reached "general AI" yet...
Honestly though, my main takeaway from the progress in AI over the last two decades is that randomness is far, far more important than we previously realized (most of the stunning results from deep learning are in fact from clever application of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes the algorithms in my own area of applied AI research.
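A minimal illustration of that "do random shit and measure the result" point: pure random search on a bumpy 1-D function, keeping the best sample seen so far. The function is invented for the example; with many local optima, naive hill climbing gets stuck, while dumb random sampling happily finds the good region:

```python
import math
import random

def bumpy(x):
    # a quadratic hill with a fast sine wobble: many local optima
    return -(x - 3) ** 2 + 2 * math.sin(5 * x)

random.seed(0)
best_x, best_y = None, float("-inf")
for _ in range(10_000):
    x = random.uniform(-10, 10)   # just sample at random...
    y = bumpy(x)
    if y > best_y:                # ...and keep the best result measured
        best_x, best_y = x, y
print(f"best x ~ {best_x:.2f}, f(x) ~ {best_y:.2f}")
```

Random restarts, random weight initialization, dropout, stochastic gradient descent: much of modern AI is variations on exactly this "sample, measure, keep the best" theme.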
You certainly seem more personally involved in the related science than I am, but also somewhat blinded by that, I think, as evidenced by our exchange.
It feels a bit like what xM(Z was getting at with Simberto, though I'm not intending to apply the harsher tone.
Just to be clear about what I actually think, I was exploring the possibility that an AI is already 'dangerously' 'out of control' and how likely we would be to know it if it was. I don't actually believe we're there now or will be in the next couple of years, though a breakthrough could happen tomorrow or decades from now. And there's the caveat of potentially already being in a simulation of some sort.
I was a bit more serious about practical simple-to-somewhat-complex engineering tasks, and I'm not sure where your input puts you on that topic. Considering your experience, I'm curious about your thoughts.
EDIT: So, tasking it with something like "getting this object from point A to point B" and giving it a physics background and whatever else makes sense, to get it to create new (at least to it) ideas.
I'm imagining combining several technologies together, like this "Skynet" demo from 2013, a Deep Blue-like AI, something like CAD software, and maybe a 3D printer for extra fun/modeling.
This is obviously just sci-fi stuff: we jump over where we are now and get dumped straight into the Matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with nothing in between. If we applied the same to other technology, it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*
Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.
But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).
I think there's some confusion about what's being discussed.
As to the brain-in-a-vat thing, people a lot smarter than I am have seriously considered the whole "are we in a simulation" question, and many find it far more probable than you are giving it credit for.
Not really trying to argue though, just have a little thought experiment fun so I'll let it go.
EDIT: I do think combining/connecting these technologies, and tasking AIs themselves with working toward better AIs/applications for these multi-faceted AIs, could lead to some revolutionary advances we can't imagine.
Not that we end up in vats overnight or they become all-knowing AIs, just that they'd be learning in a way we can't really comprehend.
I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality. It's quasi-religious mumbo jumbo, just instead of calling the all-powerful being God, we call it "the simulation" with no loss of power. At least we lose the benevolence along the way.
Also, the probability is quite literally uncomputable: we cannot possibly know the factors that go into creating an actual likelihood estimation. So if you simply assume things there, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible" depending on your assumptions. And various philosophers have argued all sides of that account, ranging from Plato to Roger Penrose.
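A quick sketch of how assumption-dependent that estimate is, Bostrom-style: the estimated fraction of minds that are simulated swings from ~0 to ~1 purely on inputs nobody can actually know. All the numbers below are arbitrary, which is exactly the point:

```python
def simulated_fraction(n_civs, sims_per_civ, minds_per_sim, real_minds):
    """Fraction of all minds that are simulated, given assumed counts of
    civilizations, ancestor-simulations each runs, minds per simulation,
    and non-simulated minds. Every input here is an unknowable guess."""
    simulated = n_civs * sims_per_civ * minds_per_sim
    return simulated / (simulated + real_minds)

# Assume many civilizations each run many huge simulations...
print(simulated_fraction(1e6, 1e3, 1e10, 1e10))  # ~1.0: "almost certain"
# ...or assume nobody ever bothers running them at all.
print(simulated_fraction(1e6, 0, 1e10, 1e10))    # 0.0: "impossible"
```

Same formula, opposite conclusions: the "probability" is entirely an echo of the assumptions fed in.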
I suppose. I guess I personally subscribe to some sort of quasi-determinism, so the concept of being in a simulation reinforces rather than undermines my worldview; it's not hard for me to accept as reasonably likely, even if I'm not as confident in it as someone like Elon Musk.
Wasn't there some sort of profound images thread somewhere? Mostly semi-popular/iconic photos and such, anyone help me find it?
@Acrofales - you need a theory of everything to reconcile: "we jump over where we are now, and get dumped straight into the matrix, with nothing in between, and that is where those scenarios hit snags" with "If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality". while those seem to have nothing to do with each other, all you need is some proper grouping/associations: - where we are now = actual reality; - the matrix = simulated reality; (you can take those equalities to be valid by convention only if it makes you feel any better; switching them around makes no difference to my point). now, we take "we jump over where we are now, and get dumped straight into..." and "there's literally no difference to us..." and apply them to the above equalities while clearing things in the process: - we don't jump anywhere, not even figuratively; we remain as we are in our actual reality and the AI from the matrix gets its own actual reality, above ours (if that makes any sense; if not, see the worm analogy: realities layered on top of each other).
a more sciency analogy: the AI would work with/in multiple universes (count them, cross them, study them, change them, etc.) while you'll be forever stuck in a cycle of evolution then extinction;
- that is the context you should work in, and once you see it like that you realize that "there's literally no difference to us" applies to all of it, to both realities (each in regard to the other), with each entity seeing it through its own perspective.
tldr - you need to stop thinking you'd have anything to do with the actual reality of the AI, insert size/space as a physical property that delimits and also allows realities to have different properties/laws/principles, then realize that "100% certain that we are in a simulation" and "utterly impossible" are the same thing just seen with other eyes. conceptualize a God that, other than chance/randomness, has nothing to do with you.
BUT, the silver lining of this exercise, besides establishing that hippies were right, is seeing how the fearful white man/culture (instinctively) thinks of AI as being another round of slaves; boooo.
or, OR, (better still) you can evaluate/assess (clinically) human personalities based on replies/beliefs: if person X has Y (personality trait) then he will apply Y to all other arguments he engages in/with. when Y is inconsistent within arguments, X is fixable.
me, i can fix inconsistencies in human beings; they desire to be fixed, be it consciously or unconsciously. i can see/read the human code based on its expression/manifestation/interaction with the environment, with the context.
On April 11 2018 18:54 xM(Z wrote: @Acrofales - you need a theory of everything to reconcile: "we jump over where we are now, and get dumped straight into the matrix, with nothing in between" with "If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother?" ...
Those were answers to two different questions that you are conflating, but regarding your overarching point, the answer is still: so what? If we have no access to some outside reality (unlike in the movies, there are no deja vus, pills or Neos; the simulation is perfect and we are trapped inside: in fact, we are just bits running through a program in a supercomputer in another universe), then it doesn't really matter *to us* whether we attempt to discover the Grand Unifying Theory of Everything, or the Grand Unifying Logic of our Simulation, as they are one and the same. We will *never* have access to the outside perspective. It is therefore not a question of science, but of faith (and as such, thoroughly uninteresting to me: just as I reject the existence of God because there doesn't seem to be any evidence for his existence, I reject this digital reincarnation of God for the exact same reasons). Until someone thinks up an experiment that would distinguish between a "real" reality (whatever the fuck that even is... you see the problem here?) and a "simulated" reality, the difference is entirely in the domain of theology.
As for certainty vs. impossibility, I am talking about that underlying theology. You can consider it a bit like Pascal's wager: he came up with a mathematical "proof" for why you should believe in God: the problem isn't in the proof, it's in the underlying assumptions. Similarly, while the mathematical "proof" for why we are living in a simulation is different (and more interesting) than Pascal's wager, it *also* depends on assumptions which you may choose to believe, or not (and being quantitative, you can simply change the numbers there) leading to the different outcomes ranging from "absolutely certain" to "completely impossible". And unfortunately, we only have one perspective here: our own. What you appear to be advocating is to say that "for God, it's easy to see he exists". Well yes, but that isn't what we're arguing about now, is it?
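To see where the assumptions hide, Pascal's wager reduces to a one-line expected-value calculation, and the conclusion flips entirely depending on the assumed payoff. All the numbers below are arbitrary, which is the whole problem:

```python
def wager_ev(p_god, payoff_heaven, cost_of_belief):
    """Naive expected value of believing: gain payoff_heaven with
    probability p_god, pay cost_of_belief otherwise. Every input is
    an assumption, not a measurement."""
    return p_god * payoff_heaven - (1 - p_god) * cost_of_belief

# A huge payoff makes belief "win" for even a microscopic prior...
print(wager_ev(p_god=1e-9, payoff_heaven=1e12, cost_of_belief=1.0))
# ...but assume a finite, modest payoff and the conclusion flips sign.
print(wager_ev(p_god=1e-9, payoff_heaven=10.0, cost_of_belief=1.0))
```

The arithmetic is never the contested part; the payoff and prior are, and the simulation argument inherits the same structure with different variables.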
Anybody know how to stop those redirect "congratulations" ads on a Google pixel using chrome? I get them here and on a few other sites and I don't want to have to leave JavaScript disabled.
On April 11 2018 18:54 xM(Z wrote: @Acrofales - you need a theory of everything to reconcile: "we jump over where we are now, and get dumped straight into the matrix, with nothing in between" with "If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother?" ...
You being xMZ I have probably completely misunderstood what you're trying to say, and will similarly be misrepresented when you reply, but carry on
PAINT TIME! i'll go off what you said there and try to get a (visual) base for the argument: you see this in two ways, "so what" and "God and/or Simulation". - for 'so what':
the ovals are uncrossable hard boundaries; in 'so what' there are no means of communication between the realms, and in God/Sim, the purple arrow shows that the Gods/Simulators can and do exert pressure upon the plebs (God made us, the Matrix keeps the flesh alive, etc.). is that a fair picture of the main (only) two stances you have there?
is there a scenario in which you envision a pleb and a God that know of each other but one is indifferent (for lack of a better word) to the other, and vice versa? (something along the lines of: there's nothing a God would do to a pleb that would improve its own situation (the reverse would also be true), so he just doesn't care/give a fuck about plebs).
also, Pascal's wager is a scam; it was coined to apply only to humans who are already religious or fearful of a potential God. in its premise it ponders the existence of a God; that should be the end of the line for any unconstrained conclusions/revelations coming from there. if there is a God and you know about it, you should give it the finger; it would not care (and that's assuming he knows what the finger is).