Imagine it is 2023 and self-driving cars are shuttling through city streets. One of them is unfortunately involved in an accident, a pedestrian is killed, and the media rush to report it. The ensuing lawsuit is sure to attract widespread attention, but on what legal basis should it be tried?
Today, John Kingston of the University of Brighton in the UK offers an answer, though a somewhat unsatisfying one. He lays out the theory of criminal liability as it applies to AI and identifies key issues that the automotive, computing, and legal fields need to confront with changed attitudes and careful responses.
The core of the debate is whether an artificial-intelligence system can bear criminal liability. Kingston notes that Gabriel Hallevy of Ono Academic College in Israel has studied this question in depth.
Establishing criminal liability normally requires clarifying both an act and an intent (in legal terms, actus reus and mens rea). Kingston says Hallevy describes three scenarios in which an AI system might be implicated in a crime.
The first case is an indirect offense, in which the direct actor is a person with a mental deficiency, or an animal, and is therefore not legally liable. Whoever directs such a person or animal to commit a crime, however, bears the corresponding criminal responsibility; a dog owner who orders the dog to attack someone is one example.
The same logic could extend to the designer or user of an intelligent machine. Kingston says: "An AI program could be regarded as an innocent agent, with the software programmer, or the user who employs it to commit a crime, treated as the perpetrator of an indirect offense."
The second case involves force majeure and accident, referring to crimes that arise when an AI system operates improperly. Kingston cites the case of an intelligent robot that killed a worker at a Japanese motorcycle factory: "The robot mistook the worker for a dangerous object and adopted what it calculated to be the most effective way to eliminate the threat. Swinging its powerful hydraulic arm, it pushed the unsuspecting worker into an adjacent machine, killing him instantly, and then resumed its work tasks."
The key question in such a case is whether the machine's programmer knew about the machine's functional deficiencies and the consequences they might produce.
The third case is a direct offense, which requires both an act and an intent. The act is straightforward to establish: if the AI system takes an action that results in a crime, or fails to act when it has a duty to act, its criminal conduct is directly proven.
Kingston believes that although criminal intent is hard to define, it remains a useful reference point. He says: "Speeding is a strict-liability offense. So, according to Hallevy, if a self-driving car is found speeding, there is a legal basis for assigning criminal liability to the AI program that was driving the car at the time." In that case, the owner might bear no responsibility.
Next comes criminal defense. How might an AI system under criminal investigation defend itself? Kingston raises several possibilities: could a malfunctioning AI program claim a defense analogous to the human defense of insanity? Could a system attacked by a computer virus claim a defense analogous to coercion or intoxication?
Such defenses are not conjured out of thin air. Kingston cites several cases in the UK in which defendants accused of computer crimes successfully argued that their machines had been infected with malware, and that the malware was responsible for the crime.
In one case, a young hacker charged with launching a denial-of-service attack argued that a Trojan horse program had carried out the attack and had then erased its own traces before the police examined the computer.
Finally, there is the question of punishment. If an AI system is directly liable for a criminal act, who or what should be punished, and in what form? These questions remain unanswered.
If criminal liability cannot attach to the AI system, however, the dispute shifts to civil law. Then the question arises: is the AI system a service or a product?
If the AI system is treated as a product, the case would proceed under product-design law, drawing on the information provided in product warranties, for example.
If it is treated as a service, the tort of negligence applies. In that case, the plaintiff usually has to establish the three elements of negligence. First, the plaintiff must prove that the defendant owed a duty of care; Kingston notes that although AI cases do not clearly define the standard of care, this element can usually be proven directly. Second, the plaintiff must prove that the defendant breached that duty. Finally, the plaintiff must prove that the breach caused harm to the plaintiff.
More complicated than any of these provisions, however, is that as the capabilities of AI systems approach and even surpass those of humans, their legal status will change.
One thing is certain: in the coming years, lawyers (or the AI systems that replace them) will be embroiled in more and more novel disputes like these.