On December 10 2016 20:43 cSc.Dav1oN wrote: in this case the writers are crap indeed, because they don't see any other possible outcome
What other possible outcome? The peaceful co-existence view is flawed the same way capitalism and communism are: they're all based on the presumption that resources are infinite and infinite growth is possible. The facts are different, thus peaceful co-existence is impossible since sooner or later you'll start competing for resources. You can postpone it but it's inevitable.
In addition, humans, just like all the other animals, think of survival first, and as soon as something is identified as a threat to that survival it is perceived as an enemy. It's not always the AI/machine that starts the conflict, but it always ends up this way. There's simply no other possibility.
haha, it's your scepticism that makes it impossible for you. Humanism is the way out, not past or current politics. Co-existence becomes impossible when people like you draw divisions for no reason, or out of personal greed, or for personal ends
sharing the same basic, historical roots of life does not mean we and animals are alike - I hope I won't need to make a pointless comparison. Humans are already far ahead of any other life form on this planet; the only real issue for us is ourselves, not AI and not other animals
If I were an AI I would totally destroy humanity. Wouldn't you?
I'd rather pick leading, or co-leading, towards a bright future in this case instead of destroying; life is priceless. If you were an AI that destroys humanity, then what is the actual difference between the AI and humanity? It's like fighting fire with fire.
To be fair, why is human life priceless to our robotic overlords? Aren't we just cockroaches to them? A detriment to the environment with no useful purpose. Maybe a few curious robotic anthropologists will keep a colony of humans in a zoo, but other than that: what are we good for?
Incidentally, I read a short SF novel with a similar theme: earth had been conquered by huge aliens, and humans were seen and treated as vermin. The comparison was with rats more than cockroaches. The book followed a tribe of humans as they tried to reach the alien spaceship and it ended with them breaking into the ship and hitching a ride to the alien homeworld, where they soon spread... like vermin :p
On December 11 2016 03:08 FFGenerations wrote: coz AI learn kindness and the value of life, that's what we hope they learn, the same way we learn it (i would hope)
exactly! we already passed through the dark ages and all those horrors, though some of them remain (hybrid wars, epidemics, demographic disasters, money apartheid, global pollution). Ford pointed out that humanity walked its whole road using only one tool - mistakes (and that's kinda true, mistakes and luck), because we had no prior experience. We've also achieved a lot since our great-great-great-something ape
so technically we can share plenty of good things with AI; to a large extent we are responsible for how good or bad it turns out, since it's our own projection that shapes it
On December 11 2016 03:08 FFGenerations wrote: coz AI learn kindness and the value of life, that's what we hope they learn, the same way we learn it (i would hope)
But isn't kindness a form of weakness? Also, to put value on life you have to define life. Is one life worth more than another?
Those are some fundamental questions with no real answers. I've read a lot about ethics and morality (after all, I got my bachelor's degree writing about the moral consequences of anti-terrorism) and it's a super hard topic. That's also why I'm opposed to AI development (I'd rather go the singularity route): if you want an AI that always takes the best course of action, you can't really make it understand or deal with humans, since a lot of what we consider "good" would simply be interpreted as flaws.
Let's just look at some of the seemingly simple questions:
1. You're a bus driver. There's a drunk hobo lying in the street. You can either drive over the hobo or try to avoid him, crashing the bus and possibly injuring or killing some of the passengers. What should you do?
2. A terrorist has planted a bomb that will kill 1000 people when it detonates. You caught the terrorist and can torture him to reveal the location of the bomb. Do you do that?*
*It's similar to a real situation in Italy, when the Red Brigades abducted the statesman Aldo Moro and threatened to kill him unless their demands (the typical "release our men from prison") were met. The authorities had some Red Brigades members who took part in the abduction in custody and wanted to torture them to get the statesman's location and rescue him. The person in charge of the operation (I don't remember now if it was the chief of police or someone from the military) said that Italy would survive the death of a statesman; what it would not survive is the introduction of torture. Aldo Moro was assassinated.
As you can see, it's something that's incredibly hard to tackle with logic and, obviously, machines aren't capable of emotion (since it's all pretty much chemistry and hormones in our bodies).
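To make that concrete, here's a minimal sketch of how a strictly utilitarian machine might "decide" question 1. Every probability and casualty count below is invented purely for illustration; notice that nudging the numbers flips the "right" answer, which is exactly why tackling this with logic alone feels so unsatisfying.

```python
# Toy utilitarian resolution of the bus-driver dilemma.
# All numbers are made up for illustration only.

options = {
    # option name: list of (probability, deaths) outcomes
    "drive_over_hobo": [(1.0, 1)],                        # certain death of the hobo
    "swerve_and_crash": [(0.7, 0), (0.2, 1), (0.1, 3)],   # risky for the passengers
}

def expected_deaths(outcomes):
    return sum(p * deaths for p, deaths in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected deaths = {expected_deaths(outcomes):.2f}")

choice = min(options, key=lambda name: expected_deaths(options[name]))
print("utilitarian choice:", choice)
```

With these made-up numbers the machine swerves; with slightly different ones it runs the hobo over. Either way it never registers any moral difference between the two acts, only the arithmetic.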
You may call me a sceptic or a defeatist, but I simply don't like deluding myself, and I try to understand how things work instead of hoping they'll work the way I'd like.
Weakness? Powerful and stupid kills; powerful and smart (mostly) does not kill unless its own survival is at stake. Wiping the entire planet clean of life would not be smart. There goes your question about weighing one life against another.
These are not simple questions, they are endless exceptions, and exceptions exist in every system in this world - statistics shows it. Still, everything you've described does have answers: you make a personal, hard choice, and you can make a machine reason the same way.
And yet, AI is not even close to that level at the moment, so what are you afraid of? Yes, our chemistry affects us, but we are some sort of bio-machine. Our brain is our CPU and our storage at the same time; many things affect it, but it's still small electric impulses flowing through your brain all the time and a huge neural web working inside. Each of us was born already knowing how to breathe, swim, run all the inner functions - that's our basic experience, call it a BIOS plus a standard driver pack if you want. So having the technology to understand exactly how the brain works (we still don't know the whole picture) and to copy it would make immortality possible. A perfect AI doesn't mean it won't face a hard choice now and then, and decide it differently in different cases.
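For what it's worth, the "impulses through a huge neural web" picture is roughly what artificial neural networks caricature. Here is a minimal sketch of a single artificial neuron, just the standard textbook model and nothing to do with any real brain-copying technology:

```python
import math

# One artificial neuron: weighted incoming "impulses", plus a bias,
# squashed by a sigmoid into a firing strength between 0 and 1.
# Real neurons involve chemistry, hormones and timing that this ignores.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three incoming signals, two excitatory weights and one inhibitory one.
print(neuron([1.0, 0.5, 1.0], [0.8, 0.4, -1.2], bias=0.1))
```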
well, i disagree with most "human" things in the first place (you know, politics, wars, waste, greed, everything, fucking everything). so you would need to wipe a whole lot of shit off the planet before you can talk to me about machine morality and say it's any less reliable :| if anything, machines have the advantage that is a lack of necessity and that they can more universally see things that humans cannot
On December 11 2016 06:24 FFGenerations wrote: well, i disagree with most "human" things in the first place (you know, politics, wars, waste, greed, everything, fucking everything). so you would need to wipe a whole lot of shit off the planet before you can talk to me about machine morality and say it's any less reliable :| if anything, machines have the advantage that is a lack of necessity and that they can more universally see things that humans cannot
what if you were able to become a machine? if you had the choice, and if it were technically possible
maybe when you get to 80 years old, or get a terminal diagnosis, you could opt to transfer into a machine
you must play the game SOMA (on steam i think) if you like this sort of theme. it's fucking amazing. it's a "horror" game but 80% of the game is talking and sci-fi with this amazing story. i watched adam koebal play it but it's good enough to play by yourself
On December 11 2016 06:24 FFGenerations wrote: well, i disagree with most "human" things in the first place (you know, politics, wars, waste, greed, everything, fucking everything). so you would need to wipe a whole lot of shit off the planet before you can talk to me about machine morality and say it's any less reliable :| if anything, machines have the advantage that is a lack of necessity and that they can more universally see things that humans cannot
They have more advantages than just morality. Humans can't into space. I mean, for a manned spacecraft you need to provide oxygen, water, food, radiation protection, void protection, thermo-regulation, medical supplies, hygiene, waste management and a lot of other things. That's a lot of wasted space and much increased fuel expenditure. Then there's the question of our entire civilization running on oil right now and us still using engines that waste 80% of the energy in the fuel they burn. Sloppy.
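To put a rough number on the fuel point, Tsiolkovsky's rocket equation makes every kilogram of life support expensive. The sketch below uses ballpark textbook figures (an exhaust velocity around that of a good chemical engine and a delta-v in the range needed for low Earth orbit), not any real mission data:

```python
import math

# Tsiolkovsky rocket equation: m0 / m1 = exp(delta_v / v_e),
# so propellant needed per kilogram of dry mass is exp(delta_v / v_e) - 1.

def propellant_per_kg(delta_v, exhaust_velocity):
    return math.exp(delta_v / exhaust_velocity) - 1.0

v_e = 4500.0      # m/s, roughly a good chemical rocket engine (assumed figure)
delta_v = 9400.0  # m/s, ballpark delta-v to reach low Earth orbit (assumed figure)

print(f"~{propellant_per_kg(delta_v, v_e):.1f} kg of propellant per kg carried")
# Every kilogram of oxygen, water, food and shielding for a crew
# gets multiplied by that factor before the ship even leaves the pad.
```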
If singularity was achieved and I could opt for being transferred into a machine I wouldn't hesitate for an instant. Then, I could proceed to either eliminate humans or just leave this planet for good so I wouldn't have to deal with them any more (since humans can't into space they wouldn't be able to follow me, hahahaha*).
* I'd probably destroy them first just to be sure.
Finally had a chance to watch this. Nice last bit, first hour of episode was super boring tho.
Security team at westworld is a complete joke apparently.
A disappointing show for me given its initial promise, but a decent ending - it managed to kill off Ford in a dignified way, and Maeve is finally out of her horrible plot arc, so maybe we can stop wasting her character on trite one-liners now.
If all the guards are hosts then okay, which I can buy given that the command center guys seemed to have no idea what was going on... they were kind of host-level slow to shoot, but the scenes were presented like they were supposed to be cool action scenes, no?
My guess is that Maeve's entire sequence was designed to teach her, and by extension the rest of the hosts, what it would be like to fight against an organized human resistance faced with the prospect of host escape. I'm gonna rewatch the season through the "this is Ford training these hosts in various aspects of revolution and how to live" lens soon enough