|
We are far from general AI, I guess, but I found this via a YouTube channel I follow, and it shows how AI comes up with solutions that we wouldn't necessarily consider, given a set of instructions (out-of-the-box thinking, creativity, finding loopholes, whatever you want to call it). The paper in question: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. It's certainly interesting how fast the learning curve for a neural net can be on specific problems, and if certain areas are connected, I'm pretty sure a more general neural net than what we already have (even if it only covers a small subset of what it could possibly work on) could give good, or even innovative, solutions to existing problems.
|
On April 24 2018 13:57 Myrddraal wrote:
On April 08 2018 15:43 xM(Z wrote:
On April 07 2018 23:15 Simberto wrote:
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition. an AI, any AI, would be able to stop itself from producing paperclips, else it would be just a machine that went off the rails. it would be able to question and change its design. i'm here (AI =): a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.

And what if the AI has a clear preference for building as many paperclips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that? Just because it is rational and self-aware does not mean that it has human-like goals.

then we're still on the definition. you're describing an obsessive-compulsive (human) disorder. even if i take it as true and paperclips are its new black, there's no way it's the only value/variable/action it can weigh. i'm here: pref·er·ence (prĕf′ər-əns, prĕf′rəns) n. a. The selecting of someone or something over another or others. b. The right or chance to make a choice. meaning it (the AI) can and does fathom other alternatives, but in your example you chose to forgo that alternatives exist, so: - when presented with alternatives and paperclips is chosen, there needs to be a reason (the machine needs to respond to "why?"; if the reason and the why don't exist, then paperclips is hard-coded into its program by you, which makes your so-called AI not an AI at all); - when presented with alternatives and paperclips becomes an obsession, then the AI would do what people do: try and fix it. i see the AI as continuing from 'the best' humans forward, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features). (see: "The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet. A view taken by some people trying to promulgate the AI effect is: as soon as AI successfully solves a problem, the problem is no longer a part of AI.) and you're cycling through every (solved) human flaw you know. rise above the clouds: you are the worm and AI is the new human. do you think of the worms and how much of an obstacle they are to you? come on ... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels). Edit: forgot about Uldridge - i'd argue that memory is not required for AI's existence, but for its survival; then i'd argue that it is not the memory (storage) that would best facilitate that, but the speed and the ability with which one can access the actual/immediate/physical information about <things one wants to learn>. memory is a flaw even in human construction, since it enables mistakes based on 'wrong' readings; or rather, a memory is as good/objective as the sensors reading the soon-to-be stored information are.
It sounds like Simberto has read or listened to some of Eliezer Yudkowsky's work, because the paperclip maximiser is his example of how he thinks "artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity". The artificial intelligence that you are referring to would be further in the future, I think, whereas the paperclip maximizer is an example that would be more likely to happen in the nearer future. Eliezer's caution is that the danger lies in the alignment problem (how can we make sure the AI's goals will align with ours, or be to our benefit) not being solved, since he thinks solving it could take 2-3 years longer than creating a general AI that is not properly aligned. I think the most important distinction that people need to be aware of when discussing AI is the difference between general and specific intelligence. From what I have heard we are still quite far away from achieving powerful general AI, and we don't have a lot to fear from specific AI (such as those that mastered Go etc). What I'd be worried about is a powerful general AI that has access to or is able to create specific AI, and is not correctly aligned with our (human) goals.

for some reason i can't see humans and the AI being contemporary, sharing the same physical space/resources.
i'm on the side of punctuated equilibria vis-à-vis phyletic gradualism. Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. This contrasts with the theory of punctuated equilibrium, which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another. (the quantum evolution model looks to be in its infancy; plus, i put too much value on that 'period of stasis' to give q.ev. too much credit for now)
so, when the AI splits (whatever that will mean), it'll be gone; zoom, zoom through universes using fungal networks.
i can't see a higher being having 'human goals'; it's like you being limited to ant goals: take care of the colony, die ... the end? (i don't know what else they have going there)
|
On April 24 2018 21:01 xM(Z wrote: ... for some reason i can't see humans and the AI being contemporary, sharing the same physical space/resources. i'm on the side of punctuated equilibria vis-à-vis phyletic gradualism. so, when the AI splits (whatever that will mean), it'll be gone; zoom, zoom through universes using fungal networks. i can't see a higher being having 'human goals'; it's like you being limited to ant goals: take care of the colony, die ... the end?
I don't really know why evolution comes into this at all. We are the ones designing the AI, so we are the ones who decide what goals to give it. It would be exceptionally stupid of us to create an AI that doesn't have "human goals". That is not to say any kind of "general AI" would be safe.
Even with human goals plenty can go wrong. Right now, it is Assad's very human goal to completely dominate the rebels in his country (and similarly the rebels have the human goal to topple Assad's government). Human goals and altruism are not the same thing, and a competent AI tasked with eradicating some mad dictator's enemies could be a very dangerous thing, even if it is completely under control and designed to be entirely obedient to "human goals". Moreover, Asimov has written books and books and books on how even the most basic "altruistic" rules can break down and cause catastrophe. If you ask me, our own ethical framework is not developed enough to even know what rules we should want to have govern a general AI. That is kinda ok, because we're also quite far away from the capability of creating a general AI, but it is something we need to be thinking about (and luckily, we are).
But... back to "evolution" of AI, even as an accident, it seems unlikely. Evolution happens when things reproduce (with error). Now we cannot possibly stop errors from happening in reproduction, but it should be fairly trivial to not have them reproduce in the first place. Of course, this would be a legal framework, and not a technological one: if we are capable of creating general AI, we are capable of giving that same AI the means of backing itself up, making copies of itself, or what have you. It would require some serious police work to ensure nobody does that. Probably something similar to the IAEA, but for AIs. Because I do agree with you that if a general AI can reproduce and evolve, it will no doubt, at some point, consider us as competitors in some way or another, and act accordingly.
|
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?
|
If you can set it up before knowing the choices, you can choose whichever option is first in the alphabet. It is not very random obviously, but should solve that single case problem.
|
On April 26 2018 15:38 Epishade wrote: How do you create your own random number generator (aka random decision maker) without using any items? Come up with any system that is sufficiently complicated that you can't "unconsciously" calculate the outcome, and that has the same chance of picking one item as the other. E.g. pick a number n greater than 10. If the nth digit of pi is even, pick the left item; otherwise pick the right item. If you suspect you are familiar enough with pi to be able to cheat, pick a number > 100, or use the nth digit of e instead.
If you don't have any way of looking up the digits of pi, you can calculate them through Taylor expansion of a Machin-like formula. Have fun!
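For anyone who would rather let a machine do the arithmetic, here is a rough Python sketch of that approach (the function names and the choice of 10 guard digits are my own, not from the posts above): it evaluates Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) in integer arithmetic and uses the parity of the nth decimal as the coin flip.

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale in integer arithmetic, via the Taylor series
    arctan(1/x) = 1/x - 1/(3*x**3) + 1/(5*x**5) - ..."""
    power = scale // x  # (1/x)**1, scaled
    total = power       # k = 0 term
    k = 1
    while power:
        power //= x * x                    # next odd power of 1/x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term  # alternating signs
        k += 1
    return total

def pi_digits(digits):
    """pi to `digits` decimals as the string "31415...", via Machin's
    formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    guard = 10  # extra working digits to absorb truncation error
    scale = 10 ** (digits + guard)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled // 10 ** guard)

def coin_flip(n):
    """Use the parity of the n-th decimal digit of pi as a two-way choice."""
    d = int(pi_digits(n + 5)[n])  # index 0 is the leading "3"
    return "left" if d % 2 == 0 else "right"
```

coin_flip(50) digs out the 50th decimal and maps even/odd to the two options; anyone trying to predict your choice would have to compute the same digit.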
|
Very cool solution!
I think e works better than pi, since the series is easier to calculate in your head.
Also, I just realized: is this question basically trying to figure out how to set up a roleplaying group in Plato's cave?
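The series for e really is friendlier to mental arithmetic, since each term is just the previous one divided by a small integer. A minimal Python sketch of the same idea (illustrative only; the guard-digit count is an arbitrary choice), summing e = 1/0! + 1/1! + 1/2! + ... in integer arithmetic:

```python
def e_digits(digits):
    """e to `digits` decimals as the string "27182...", from the
    factorial series, using integer arithmetic."""
    guard = 5  # extra working digits to absorb truncation error
    term = 10 ** (digits + guard)  # 1/0!, scaled
    total = 0
    k = 0
    while term:
        total += term
        k += 1
        term //= k  # turn 1/(k-1)! into 1/k!
    return str(total // 10 ** guard)
```

The loop stops as soon as the scaled term underflows to zero, which happens after roughly as many terms as it takes k! to exceed the working scale, so it converges far faster than the arctan series for pi.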
|
What is the correct response to "boy, you're tall" or something similar?
I've used "thank you" and "it's true", but neither feels right.
|
|
I feel like people would feel insulted.
|
|
|
"Tall boys are called men"
|
On April 24 2018 21:44 Acrofales wrote: ... Because I do agree with you that if a general AI can reproduce and evolve, it will no doubt, at some point, consider us as competitors in some way or another, and act accordingly.
i'm short on time these days so i don't have time to ramble on this, but you seem stuck on the notion of a subservient AI, one which you control either by its make-up/design or by guilt (human goals/ethics/emotions). all i can say here is: ditch your white man issues/complexes; you (as a human) are not the end-all, be-all.
other than that, regardless of how you design your AI and how many fail-safes you add to it, there will be a point at which the AI will birth itself into being and be separate from your design constraints. before that, we're talking about a machine we control (it may look smart but it'll still be a machine), and after that point, we'll be talking about a being. (from my pov, you only talk about the former, which is not interesting)
To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
|
On April 28 2018 04:33 JimmiC wrote: What is the correct response to "boy, you're tall" or something similar? "No, I'm not."
The taller you are, the better this response is.
Alternatively, since that has already been said, "thanks, you too" is also acceptable.
|
The best one I got was when I was just a wee little tyke: a tall guy bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.
I think that would be exponentially better when told to adults.
The more deadpan the better.
|
On April 28 2018 22:30 GreenHorizons wrote: The best one I got was when I was just a wee little tyke ... The more deadpan the better. "No, no, not at all. I'm just the one on top. William, let's go". Completely straight-faced, walk away after.
|
Hi, what good computer headset/headphones would you recommend for a $15-$20 budget?
|
Whatever has some kind of decent reviews on Amazon. You are not going to get a good headset at that price.
|
On May 01 2018 05:50 Wrath wrote: Hi, what good computer headset/headphone do you recommend for a $15-$20 budget? Sades are okay, and Sentey has some decent ones at that price. Nothing's going to be too great at that price, but both those brands have models with ~4-star reviews.
|
|
|
|