Why challenge the creation of a legal personality specific to robots?

For a long time, legal scholars have argued against the idea of granting robots
legal personality. This rejection was expressed, inter alia, in the summary report
France artificial intelligence (#FranceIA), the fruit of the work of the group that I chaired on the
legal issues of AI.
First, all the experts who actually work on AI know that the concept of
“strong” AI, as it is presented today, is a myth. Strong AI refers to a machine
that would have the ability to produce intelligent behavior and to experience
self-awareness, allowing the machine to understand what it does. The state of research
in this area shows that we are still far from having developed such an
intelligence; some even think that it will never emerge. However, the European
Parliament's recommendation almost takes the existence of a “strong” AI for granted
in the short term, whereas for the time being it has more to do with science fiction than with reality.
This recommendation therefore contributes to the creation of a false perception of AI by the
general public.

Then, legally, the creation of a legal personality for the robot is an aberration.
The idea would be to give the robot an identity and capital (like a company),
which would allow it, in particular, to face a liability action. This poses a
problem with regard to the assets that would be attached to the robot and that would therefore
have to be funded in order to satisfy any compensation debt. In addition, the creation of such a
personality is inappropriate because, for the most part, the current liability rules
work for the robot, given its current capabilities and autonomy. Liability
for things in fact applies to the user of the robot, who has
custody of it (use, direction, and control of the thing).
Case law has led to the emergence of the concept of “intellectual custody”,
under which it is the orders given by the user that establish custody. If the user
voluntarily orders his robot to perform potentially damaging acts, he
will be liable. Similarly, the user will in any case be expected to control the
characteristics of the robot and anticipate the harmful consequences of its use.
In this respect, the civil liability reform project presented on March 13, 2017 by the Garde
des Sceaux (the French Minister of Justice) proposes a recasting of article 1243 of the Civil Code relating to
liability for things. It can be considered that this project validates the concept of “intellectual
custody” by removing the specific liability for animals (whose owner is liable, even
in the event that the animal is lost or has escaped) and integrating it into this
general liability for things.

Beyond the user, the liability of the robot's manufacturer may also be engaged
under the rules on defective products. The manufacturer of the robot will incur
liability as a producer in the event of a defect in the product, which “does not offer the
safety which can legitimately be expected […]”. In the event that the damage
is caused by a defect inherent in the robot, not caused by the user himself, this
liability regime will apply to the manufacturer.
These examples show that creating a legal personality for the robot, with the
aim of making that legal object directly liable, would amount to
absolving the owners and manufacturers of robots of responsibility by erecting a
legal screen between them and their obligations.