#16
Not necessary at all. Just implement this.
__________________
There are many dangers on Pandora, and one of the subtlest is that you may come to love it too much.
#17
I thought the robots' AI was going to be more advanced than just surveillance. Bribes and the like might be more difficult with these guys around, but as long as the human factor is present in the operation of the robot, corruption will always emerge eventually. The only way to have an incorruptible policeman is to design a robot/human programmed to obey the law, no exceptions. But would we want an entity that executes the law no matter the cost?
__________________
#18
Yay Korea
I hope they get their "guards"; I'd like to see how it turns out.
#19
Quote:
Anyway, I think there is a problem with AIs. The hope would be that they're rational, incorruptible and follow the rules 100%. The way it looks now, AIs are less likely to come from programming a set of rules and responses into a computer than from creating learning machines, essentially neural networks that in a way mimic biological brains. To my knowledge there is no inherent reason why this would produce any of those desired outcomes. This has been dealt with in many sci-fi novels. You run into two main problems there: the possibility of self-awareness of AIs, at which point it becomes a philosophical/ethical problem, and the unpredictability of the behaviour of such systems, e.g. how would you ensure that an AI created by a learning process will obey the rules, any more than you can ensure that for a human being? Even if you take Asimov's laws of robotics and somehow enforce them in such an AI, that as well can lead to undesired outcomes.

As usual, this thread deviates a lot from the OT, which was about the use of robotic prison guards and the potential of semi-autonomous robots like this to be used in ways that harm people. They are used as soldiers already, but I don't want to see them in prisons, schools, retirement homes, or on the streets fighting uprisings or protests.
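The quoted contrast between explicitly programmed rules and learned behaviour can be made concrete with a toy sketch (entirely hypothetical, stdlib Python only): a hand-written rule behaves exactly as coded, while a "learned" policy, here a one-nearest-neighbour lookup, does whatever its training data implies on cases nobody anticipated.

```python
import math

# Hypothetical training data: (threat_level, staff_nearby) -> intervene?
# Note that no training example has staff_nearby = 1.
TRAINING = [
    ((0.9, 0), True),
    ((0.8, 0), True),
    ((0.2, 0), False),
    ((0.1, 0), False),
]

def rule_based(situation):
    """Explicit rule: intervene only on high threat with no staff nearby."""
    threat, staff_nearby = situation
    return threat > 0.5 and not staff_nearby

def learned(situation):
    """'Learned' policy: copy the label of the closest training example.
    Nothing guarantees it respects the staff-nearby rule, because that
    rule exists only implicitly (here not at all) in the data."""
    nearest = min(TRAINING, key=lambda ex: math.dist(ex[0], situation))
    return nearest[1]

# A case the data covers: both policies agree.
print(rule_based((0.95, 0)), learned((0.95, 0)))  # True True
# High threat but staff present: the rule refuses, while the learned
# policy generalizes from "high threat" alone and intervenes anyway.
print(rule_based((0.9, 1)), learned((0.9, 1)))    # False True
```

The point of the sketch is exactly the post's: the learned policy is not wrong by its own lights, it simply has no notion of the rule it was supposed to embody.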
__________________
Know your idols: Who said "Hitler killed five million Jews. It is the greatest crime of our time. But the Jews should have offered themselves to the butcher's knife. They should have thrown themselves into the sea from cliffs."? (Solution: "Mahatma" Gandhi) Stop terraforming Earth (wordpress) "Humans are storytellers. These stories then can become our reality. Only when we lose ourselves in the stories do they have the power to control us. Our culture got lost in the wrong story, a story of death and defeat, of oppression and control, of separation and competition. We need a new story!"
#20
Quote:
Isn't being programmed to obey the law almost as unethical as being socially forced into a certain set of rules? I mean, the latter is not as blatant, but it's the insidious nature of such indirect manipulation that makes it almost as bad as your average dictatorship. Quote:
Ethical problems usually arise from the treatment of life, not from the creation of life itself; otherwise having babies would be unethical, because none of those babies ever asked to be born. Does that even make sense?
#21
I don't think we should grant a robotic entity learning skills. If such an entity can learn good things, it is probably also able to learn bad things; in short, it can make a "bad" decision.
Apart from that, as Aquaplant correctly pointed out, we are educated and obliged to obey the law, or severe punishment will be inflicted on us (if law enforcement finds out). Of course we can break the law, but we are conscious of the consequences. The same cannot be said of robotic entities, at least not yet.
__________________
#22
Quote:
IOW, too late.
__________________
#23
Going a bit off topic, this shows that the decision-making process of an autonomous entity is inevitably shaped by the basic instructions it is given. Write good code and you'll get good results; write ambiguous code and you'll get erratic results; write bad code, and prepare for hell!
Unless the entity acquires consciousness, in which case I do not know what to say.
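The "ambiguous code, erratic results" point can be illustrated with a deliberately toy example (hypothetical, not from any real robot): the instruction "go to the nearest exit" never says what to do on a tie, so the behaviour silently depends on an arbitrary detail like input order.

```python
def nearest_exit(position, exits):
    """Ambiguous spec: which exit wins when two are equally near?
    Python's min() silently picks whichever comes first in the list,
    so the tie-breaking rule is an accident, not a decision."""
    return min(exits, key=lambda e: abs(e - position))

print(nearest_exit(5, [0, 10]))  # -> 0
print(nearest_exit(5, [10, 0]))  # -> 10 (same situation, different action)
```

Nothing here is "wrong code" in the compiler's sense; the erratic behaviour comes purely from an under-specified instruction, which is the post's point.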
__________________
#24
Actually, not only are you using a fallacy (The Logical Fallacy of Generalization from Fictional Evidence - Less Wrong), but the thread was never about robots harming humans; it was simply about replacing humans with them, which actually REDUCES harm, since a guard can no longer beat a prisoner up because the prisoner said something they disliked (and it also means that if prisoners start beating each other up or stab someone, it will be noticed quickly and dealt with).
__________________
#25
Interesting reading. I found the part about probability distributions in science fiction particularly controversial, considering one can only say that probability distributions are just that: probabilities. Modern physics holds that the speed of light is the ultimate speed limit in the universe, and yet scientists have measured particles with non-zero mass apparently travelling faster than light (not once but twice!). The probability of breaking the speed of light with non-infinite energy and mass is nearly zero; in practical terms, zero.
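For scale, the faster-than-light claim referenced here (the widely reported 2011 OPERA measurement: neutrinos over a ~730 km baseline arriving roughly 60 ns earlier than light would, a result still under scrutiny at the time of this thread) corresponds to only a tiny fractional excess. A quick back-of-envelope check:

```python
c = 299_792_458       # speed of light, m/s
baseline_m = 730_000  # CERN -> Gran Sasso baseline, ~730 km
early_s = 60e-9       # ~60 ns early arrival, as reported

t_light = baseline_m / c      # light travel time over the baseline
excess = early_s / t_light    # fractional excess (v - c) / c

print(f"light travel time: {t_light * 1e3:.3f} ms")
print(f"(v - c)/c = {excess:.1e}")  # about 2.5e-05
```

So even taken at face value, the claimed excess over light speed was a few parts in 100,000, which is why the systematic-error checks mentioned below mattered so much.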
Anyway, speculation will always be a part of development, based on fiction or not. Back on topic, it remains to be seen whether the robots will be helpful, but the cost of using them will certainly play a decisive role in their fate. Even if the robots prevent unnecessary exposure of guards to violent criminals, eventually a guard will have to deal with those problems.
__________________
#26
In theoretical terms it is also zero; those experiments have not been verified as having been correctly performed.
__________________
#27
Quote:
EDIT: Actually, what was reported was that they repeated the experiment with the corrections suggested by peers after the shock of the initial results.
__________________
Last edited by applejuice; 11-29-2011 at 04:04 PM.
#28
Quote:
Quote:
Sci-fi stories are just that: stories, narratives and explorations of possibilities. They are also metaphors or placeholders for problems. This is true for the problematic side (e.g. "robot overlords") as well as for the romantic side (e.g. "benevolent Techno-Gaia"). But I know that prison guards on wheels are very far from that problem; it just seems that every topic in this forum that deals somehow with robots or computers eventually turns out to be about AIs and some rather sci-fi-oriented idea of a great robotic future :s Quote:
And I have huge issues with this. For one, if those robots are just for observing the inmates, why not simply install cameras? Why create something that is essentially a moving camera, and vulnerable to pranks? The only reason I can fathom is that the plan is to use these guards for more elaborate purposes than simple observation at a later time. Otherwise it does not make sense. And at that moment, the robot will have to carry weaponry, because that's what prison guards do.

Also, by putting a mediating layer between the prison guards and the prisoners, you cause all kinds of problems. It is far easier to press a button knowing it hurts someone elsewhere than to stare the victim in the eye and press that button.

The biggest problem I have with this is the added isolation, mechanization and, frankly, dehumanization of people (in this case prisoners) by replacing people with machines. In a prison, this may make sense if you regard the prisoners as evil subhumans, as part of a machine that processes them until they have to be released. I guess in US prisons that attitude is still there, but it is utterly wrong! Prisons are supposed to be "correctional facilities", not punishment houses. If the purpose of prisons were to punish people and lock them away so they cannot do any more harm, we would be reverting to the early 20th century. Instead, a prison should have the goal of producing people who can later re-enter society. This is why prisoners should not just be given the chance to educate themselves, have therapy if needed, exercise and serve apprenticeships, but also a human context. After all, the guards are the only people from the outside world that prisoners meet on a regular basis; if you isolate them even more, they will go more insane, because the only people they see for years are other inmates. What you then produce, socially, are people who are incapable of finding a place in society.
It may save the guards some time walking through corridors - economically it may make sense, but my concern is never about economics, it is always about the people.
__________________
#29
Quote:
#30
__________________
Misery Forever.