Korea wants robotic prison guards - Page 2 - Tree of Souls - An Avatar Community Forum
#16
11-27-2011, 11:13 PM
tm20 (Olo'eyktan)
Join Date: May 2010
Posts: 2,745

Not necessary at all. Just implement this.

#17
11-28-2011, 02:25 AM
applejuice (Taronyu)
Join Date: Dec 2010
Location: In the end of the world
Posts: 363

I thought the robots' AI was going to be more advanced than just surveillance. Bribes and the like might be more difficult with these guys around, but as long as the human factor is present in the operation of the robot, corruption will always emerge eventually. The only way to have an incorruptible policeman is to design a robot/human programmed to obey the law... no exceptions. But would we want an entity that executes the law no matter the cost?
#18
11-28-2011, 04:54 AM
Moco Loco (Dandy Lion)
Join Date: Jun 2011
Location: New Orleans
Posts: 2,912

Yay Korea! I hope they get their "guards"; I'd like to see how it turns out.
#19
11-28-2011, 01:20 PM
auroraglacialis (Tsulfätu)
Join Date: Apr 2010
Location: Central Europe
Posts: 1,610

Quote:
Originally Posted by applejuice
The only way to have an incorruptible policeman is to design a robot/human programmed to obey the law... no exceptions. But, would we want an entity that executes the Law no matter the cost?
A human programmed to obey the law... shudder! You should hear what you're saying.

Anyway, I think there is a problem with AIs. The hope is that they are rational, incorruptible and follow the rules 100%. The way it looks now, though, AIs are less likely to come from programming a set of rules and responses into a computer than from creating learning machines, essentially neural networks that in some way mimic biological brains. To my knowledge there is no inherent reason why this would produce any of those desired outcomes. This has been dealt with in many sci-fi novels. You run into two main problems there: the possibility of self-awareness in AIs, at which point it becomes a philosophical/ethical problem, and the unpredictability of the behaviour of such systems - e.g. how would you ensure that an AI created by a learning process will obey the rules any more than you can ensure that for a human being? Even if you take Asimov's laws of robotics and somehow enforce them in such an AI, that too can lead to undesired outcomes.
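As a toy illustration of that unpredictability, here is a minimal Python sketch with invented numbers (it uses a nearest-neighbour learner rather than an actual neural network, but the point about auditability is the same): the hand-written rule can be read and checked line by line, while the learned rule's behaviour on inputs it never saw is an accident of whichever training example happens to lie closest.

Code:
# Hypothetical example: deciding whether a guard robot should flag a situation.
# Inputs are (distance_to_inmate_in_metres, noise_level_from_0_to_1).

TRAINING = [          # invented training examples: (input, should_flag)
    ((1.0, 0.9), True),
    ((2.0, 0.8), True),
    ((5.0, 0.2), False),
    ((8.0, 0.1), False),
]

def handwritten_rule(x):
    # Explicit rule: flag only if close AND loud. Every case can be audited.
    return x[0] < 3.0 and x[1] > 0.7

def learned_rule(x):
    # "Learned" rule: copy the label of the nearest training example.
    nearest = min(TRAINING,
                  key=lambda ex: (ex[0][0] - x[0]) ** 2 + (ex[0][1] - x[1]) ** 2)
    return nearest[1]

# On (2.5, 0.3) - quiet, fairly close - the two rules disagree: the learned rule
# flags the situation only because that point happens to lie nearest a True example.
for probe in [(1.5, 0.85), (2.5, 0.3), (9.0, 0.95)]:
    print(probe, "handwritten:", handwritten_rule(probe), "learned:", learned_rule(probe))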

As usual, the thread deviates a lot from the original topic, which was about the use of robotic prison guards and the potential for semi-autonomous robots like this to be used in ways that harm people. They are used as soldiers already, but I don't want to see them in prisons, schools, retirement homes or on the streets fighting uprisings or protests.
#20
11-28-2011, 03:35 PM
Aquaplant (Tsamsiyu)
Join Date: Mar 2010
Posts: 690

Quote:
Originally Posted by auroraglacialis
A human programmed to obey the law... shudder! You should hear what you're saying.
Well, aren't we already kind of like that? I mean, we are brought up from early childhood to obey our parents, teachers, elders and what have you. We are, and always will be, rebellious, but much of our society is based on obedience. Our need to belong leads to compromises in our personal wants and desires, so we are never really free in the true sense of the word.

Isn't it almost equally unethical to be programmed to obey the law as it is to be socially forced into a certain set of rules? The latter is not as blatant, but it's the insidious nature of such indirect manipulation that makes it almost as bad as your average dictatorship.

Quote:
Anyway, I think there is a problem with AIs. The hope is that they are rational, incorruptible and follow the rules 100%. The way it looks now, though, AIs are less likely to come from programming a set of rules and responses into a computer than from creating learning machines, essentially neural networks that in some way mimic biological brains. To my knowledge there is no inherent reason why this would produce any of those desired outcomes. This has been dealt with in many sci-fi novels. You run into two main problems there: the possibility of self-awareness in AIs, at which point it becomes a philosophical/ethical problem, and the unpredictability of the behaviour of such systems - e.g. how would you ensure that an AI created by a learning process will obey the rules any more than you can ensure that for a human being? Even if you take Asimov's laws of robotics and somehow enforce them in such an AI, that too can lead to undesired outcomes.
Wouldn't it be nice if, for a change, we could have a story of humanity creating an artificial intelligence that we have fun with, instead of abusing it and then ending up fighting against it?

Ethical problems usually arise from the treatment of life, not from the creation of life itself; otherwise having babies would be unethical, because none of those babies ever asked to be born. Does that even make sense?

Quote:
As usual, the thread deviates a lot from the original topic, which was about the use of robotic prison guards and the potential for semi-autonomous robots like this to be used in ways that harm people. They are used as soldiers already, but I don't want to see them in prisons, schools, retirement homes or on the streets fighting uprisings or protests.
We have a tendency to delegate unpleasant tasks of all kinds to machines or automation. But I guess that being a prison guard in the first place takes a special kind of damaged personality, because healthy people can't stand watching the kinds of wrongs that presumably happen in prisons. Then again, we are an adaptive survival species, and if one can get by in life by being a prison guard, then one will slowly but surely adjust one's mental faculties accordingly, or face insanity.
#21
11-28-2011, 04:36 PM
applejuice (Taronyu)
Join Date: Dec 2010
Location: In the end of the world
Posts: 363

I don't think we should grant a robotic entity learning skills. If such an entity can learn good things, it is probably also able to learn bad things. In short, it can make a "bad" decision.

Apart from that, and as Aquaplant correctly pointed out, we are educated and obliged to obey the law, or severe punishment will be inflicted on us (if law enforcement finds out). Of course we can break the law, but we are conscious of the consequences. The same cannot be applied to robotic entities, at least not now.
#22
11-28-2011, 05:01 PM
Clarke (Karyu)
Join Date: Jul 2011
Location: Scotland, 140 years too early
Posts: 1,330

Quote:
Originally Posted by applejuice
I don't think we should grant a robotic entity learning skills. If such an entity can learn good things, it is probably also able to learn bad things. In short, it can make a "bad" decision.
I'd ask, "What is machine learning?", but I think you already know the answer. Anyway, look at me still talking when there's science to do.

IOW, too late.
#23
11-29-2011, 01:23 AM
applejuice (Taronyu)
Join Date: Dec 2010
Location: In the end of the world
Posts: 363

Going a bit off topic, this shows that the decision-making process inevitably involves the influence of the basic instructions given to an autonomous entity. Write good code and you'll get good results; write ambiguous code and you'll get erratic results; write bad code and prepare for hell!
Unless the entity acquires consciousness; then I don't know what to say.
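To make the "ambiguous code" case concrete, here is a minimal Python sketch (hypothetical names and thresholds): the same vague instruction, "intervene when activity is unusually high", admits two equally reasonable implementations that quietly disagree on real inputs.

Code:
def intervene_strict(activity_level):
    # Reading 1: "unusually high" means above 0.8 on a 0-to-1 scale.
    return activity_level > 0.8

def intervene_lenient(activity_level):
    # Reading 2: "unusually high" means above 0.5 on the same scale.
    return activity_level > 0.5

# Both functions faithfully implement the written instruction, yet the robot's
# behaviour depends entirely on which reading the programmer happened to choose.
for level in (0.3, 0.6, 0.9):
    strict, lenient = intervene_strict(level), intervene_lenient(level)
    verdict = "agree" if strict == lenient else "DISAGREE"
    print(f"activity={level:.1f}  strict={strict}  lenient={lenient}  -> {verdict}")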
#24
11-29-2011, 01:32 AM
Human No More (Toruk Makto, Admin)
Join Date: Mar 2010
Location: In a datacentre
Posts: 11,726

Actually, not only are you using a fallacy (The Logical Fallacy of Generalization from Fictional Evidence - Less Wrong), but the thread was never about robots harming humans, simply about replacing humans with them, which actually REDUCES harm, since a guard now can't beat a prisoner up because the prisoner said something they disliked (and it also means that if prisoners start beating each other up or stab someone, it will be noticed quickly and dealt with).
#25
11-29-2011, 03:28 PM
applejuice (Taronyu)
Join Date: Dec 2010
Location: In the end of the world
Posts: 363

Interesting reading. I found the part on probability distributions in science fiction particularly controversial, considering one can only say that probability distributions are just that: probabilities. Modern physics predicted that the speed of light was the ultimate speed limit in the universe, and yet scientists measured non-zero-mass particles travelling faster than light (not once but twice!). The probability of breaking the speed of light with non-infinite energy and mass is nearly zero; in practical terms, zero.

Anyway, speculation will always be a part of development, based on fiction or not.

Back on topic, it remains to be seen whether the robots will be helpful or not, but certainly the cost of using them will play a decisive role in deciding their fate. Even if the robots prevent unnecessary exposure of guards to violent criminals, eventually a guard will have to deal with those problems.
#26
11-29-2011, 03:52 PM
Clarke (Karyu)
Join Date: Jul 2011
Location: Scotland, 140 years too early
Posts: 1,330

Quote:
Originally Posted by applejuice
The probability of breaking the speed of light with non-infinite energy and mass is nearly zero; in practical terms, zero.
In theoretical terms it is also zero; those experiments have not been verified as having been performed correctly.
#27
11-29-2011, 04:02 PM
applejuice (Taronyu)
Join Date: Dec 2010
Location: In the end of the world
Posts: 363

Quote:
Originally Posted by Clarke
In theoretical terms it is also zero; those experiments have not been verified as having been performed correctly.
Shhh, don't spoil it!
EDIT: Actually, what was reported was that they repeated the experiment with the corrections suggested by peers after the shock of the initial results.
Last edited by applejuice; 11-29-2011 at 04:04 PM.
#28
12-01-2011, 02:24 PM
auroraglacialis (Tsulfätu)
Join Date: Apr 2010
Location: Central Europe
Posts: 1,610

Quote:
Originally Posted by Human No More
I assume now that this was directed mostly at me as I brought up SciFi?
Quote:
Originally Posted by auroraglacialis
There is to my knowledge no inherent reason why this would produce any of those desired outcomes. This has been dealt with in many sci-fi novels. You run into two main problems there: the possibility of self-awareness in AIs, at which point it becomes a philosophical/ethical problem, and the unpredictability of the behaviour of such systems - e.g. how would you ensure that an AI created by a learning process will obey the rules any more than you can ensure that for a human being? Even if you take Asimov's laws of robotics and somehow enforce them in such an AI, that too can lead to undesired outcomes.
I read your "fallacy" article and it was interesting, but I think it is valid to use sci-fi novels as a reference to what others have thought of before. I would not conclude from them that things have to be so, but in some cases at least these are really well-researched novels. Things like "Terminator" and other movie or cheap sci-fi stuff are mostly exciting stories, but good novels (something that has become a rarity these days) actually look at the problems of society and technology and explore them in a fictional context. Besides that, my use of a term like "Asimov's laws of robotics" was in this case more of a placeholder for any arbitrary set of rules that fulfil the same task those do in his novels. The statements I made are to a large part also valid without specific sci-fi references, mostly the unpredictability of AIs, which was even mentioned in that article. I see this as a problem. Also, I think the general issues of AIs becoming self-aware or being bound to some human-made set of rules exist apart from any sci-fi explorations of how this could play out specifically.
SciFi stories are just that - stories, narratives and explorations of possibilities. They are also metaphors or placeholders for problems.
This is true for the problematic side (e.g. "Robot overlords") as well as for the romantic side (e.g. "benevolent Techno-Gaia").

But I know that prison guards on wheels are very far from that problem; it just seems that all the topics in this forum that deal somehow with robots or computers eventually turn out to be about AIs and some rather sci-fi-oriented idea of a great robotic future :s

Quote:
Originally Posted by Human No More
the thread was never about robots harming humans, simply about replacing humans with them, which actually REDUCES harm, since a guard now can't beat a prisoner up because the prisoner said something they disliked (and it also means that if prisoners start beating each other up or stab someone, it will be noticed quickly and dealt with).
Well I started the thread, so I should know what it was about.
And I have huge issues with this. For one thing, if those robots are just for observing the inmates, why not simply install cameras? Why create something that is essentially a moving camera that is vulnerable to pranks? The only reason I can fathom is that the plan is to use these guards for more elaborate purposes than simple observation at a later time. Otherwise it does not make sense. And at that point, the robot will have to carry weaponry, because that's what prison guards do.
Also, by putting a layer of mediation between the prison guards and the prisoners, you cause all kinds of problems. It is way easier for people to press a button knowing it hurts someone elsewhere than to stare the victim in the eye and press that button.
The biggest problem I have with this is the added isolation, mechanization and, frankly, dehumanization of people (in this case prisoners) by replacing people with machines. In a prison this may make sense if you regard the prisoners as evil subhumans - as parts of a machine that processes them until they have to be released. I guess in US prisons that attitude is still there, but it is utterly wrong! Prisons are supposed to be "correctional facilities", not punishment houses. If the purpose of prisons were to punish people and to lock them away so they cannot do any more harm, we would be reverting to the early 20th century.

Instead, a prison should have the goal of producing people who can later re-enter society. This is why prisoners should not only be given the chance to educate themselves, have therapy if needed, do exercise and serve apprenticeships, but should also be provided with a human context. After all, the guards are the only people from the outside world that those prisoners meet on a regular basis; if you isolate them even more, they will go more insane, because the only people they see for many years are other inmates. What you then produce, socially, are people who are incapable of finding a place in society. It may save the guards some time walking through corridors, and economically it may make sense, but my concern is never about economics; it is always about the people.
#29
12-01-2011, 05:46 PM
Aquaplant (Tsamsiyu)
Join Date: Mar 2010
Posts: 690

Quote:
Originally Posted by auroraglacialis
But I know that prison guards on wheels are very far from that problem; it just seems that all the topics in this forum that deal somehow with robots or computers eventually turn out to be about AIs and some rather sci-fi-oriented idea of a great robotic future :s
And as long as I'm around, they will continue to turn out like that, because as I've said, I need my robots, because needing humans is unethical.
#30
12-01-2011, 07:32 PM
iron_jones (Olo'eyktan)
Join Date: Aug 2010
Posts: 2,907

Quote:
Originally Posted by Aquaplant
needing humans is unethical.
Why the hate, bro?