If you like to play against real dealers, you probably spend time at live casino Canada sites. Well, what would you think if you were told that in the near future you might be playing against a robot instead of a real human? You would undoubtedly get a different experience, but such a situation also raises an obvious question: what if the robot makes a mistake? Contrary to popular belief, robots make errors quite often. A robotic arm struck and killed a worker at a car factory in Germany. A patient died during a robot-assisted surgery in England. In America, an autonomous car struck and killed a pedestrian. The examples could easily be multiplied, and as we continue to use robots, the number of such accidents will only grow. So, who will bear the civil and criminal responsibility?
Robots without AI
To answer this question, we need to start with the basics. Robots that are not managed by an AI, meaning they cannot make independent decisions on their own, behave strictly according to their programming. We mentioned above that a robotic arm slammed into a worker, causing his death. In such a scenario, the robot itself cannot bear any civil or criminal liability. Whether it did exactly what it was programmed to do or its program failed, “humans” are responsible: a robot is just a tool.
There is no law that says what to do in such a situation, because the laws of every country in the world were written before robots existed. However, it is possible to apply existing legal regulations to such scenarios. For example, almost every country has a rule stating that if damage is caused by an “item,” that damage must be covered by the item’s owner. If you fly your drone without authorization and violate people’s privacy, you bear legal and criminal responsibility as the drone’s owner; you can’t blame the machine. So although there is no specific regulation covering the responsibility of robots, existing laws let us determine who answers for a robot’s mistakes. As long as a robot cannot act or make decisions on its own, its owner or programmer is the responsible party.
Robots with AI
But things change when it comes to AI-powered robots that can make independent decisions and act on their own. They change so much that we start drifting into “science fiction.” First of all, no legal regulation currently foresees such a situation: even the most developed countries in the world have no law saying who is responsible if a robot with artificial intelligence commits a crime. For now, the question belongs more to legal philosophy: there are many ideas, but no one knows which of them should be applied.
The basis of criminal law is not only punishment but also deterrence. In other words, punishing the criminal is also a preventive measure: people may refrain from committing crimes so as not to end up in the same situation. So, can robots do that too? Would punishing a robot with artificial intelligence set an example for other robots?
We don’t know whether Isaac Asimov predicted this day when he wrote his Three Laws of Robotics years ago, but those laws may be the most logical solution. According to them, a robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey a human’s orders as long as they do not conflict with the first law. And a robot must protect its own existence as long as doing so does not conflict with the first and second laws. Maybe all we need is to build them into the robots’ programming, who knows? The sketch below shows what such a rule check might look like.
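To make this concrete, here is a minimal, purely illustrative sketch of how the three laws might be encoded as a priority-ordered check on a robot’s candidate actions. Everything in it (the Action class, its boolean flags, the evaluate_action function) is hypothetical and invented for this article; a real robot would need a far richer model of the world to judge harm at all.

```python
# A toy, hypothetical encoding of Asimov's Three Laws as a
# priority-ordered filter on candidate actions. Invented for
# illustration only; no real robotics system is assumed here.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool        # executing this would injure a human
    allows_human_harm: bool  # not intervening would let a human be harmed
    ordered_by_human: bool   # a human commanded this action
    endangers_self: bool     # executing this could destroy the robot


def evaluate_action(action: Action) -> bool:
    """Return True if the action is permitted under the three laws."""
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human to come to harm. This overrides everything else.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the
    # First Law (harmful orders were already rejected above).
    # Third Law: protect your own existence unless that conflicts with
    # the First or Second Law, so a self-endangering action is allowed
    # only when a human has ordered it.
    if action.endangers_self and not action.ordered_by_human:
        return False
    return True


# Example: an ordered action that would harm a human is still refused.
print(evaluate_action(Action("push cart", harms_human=True,
                             allows_human_harm=False,
                             ordered_by_human=True,
                             endangers_self=False)))  # -> False
```

Even this toy version exposes the hard part: the laws are only as good as the robot’s ability to predict harm in the first place, which is exactly the question we turn to next.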
Can an AI robot predict that its actions will cause harm? If so, can it be held responsible for that fault? And what if a robot acts deliberately, intending to cause harm? People collect data from their environment, process it in their brains, and make decisions; as a result, they can be held responsible both for their negligence and for their intent. We don’t know how the “brains” of AI robots work in this sense, and their programs may give them a very different perspective. Perhaps there is no need to punish robots that make mistakes: it may be enough to tell them that what they did was wrong, because whether a robot can even comprehend the meaning of “punishment” is another matter of debate. In short, nobody knows what to do if a robot with artificial intelligence commits a crime or makes a mistake. There are just lots of ideas.