Artificial Intelligence: from diagnostic programs to sex robots - unresolved liability questions


Product Liability Alert


Artificial Intelligence (AI) has been defined as the capability of a machine to imitate human cognitive functions, such as learning, problem solving or emotional or physical responsiveness. AI is already widely and productively in use, to varying degrees, in an array of settings. For example, it has been successfully applied in the healthcare industry where, according to MIT researchers, a mathematical model developed with the help of AI correctly determined the optimum dosage of drugs to give to cancer patients. Furthermore, several ongoing studies are investigating the use of AI to help diagnose and treat cancer patients. In that realm, one notable area of research is focusing on acute myeloid leukemia, a disease against which little progress has been made for several decades.

In the area of personal care, a Japanese company is developing an interactive smart mirror, which scans the user's face, reads the user's emotions and then responds verbally – often with compliments.

In the entertainment realm, we are all familiar with such popular TV programs as Westworld, Battlestar Galactica and Humans – shows in which robots are highly developed and intelligent servants, seemingly with human consciousness. These fictional beings are apparently crossing into the real world: in late September 2018, a Canadian company, KinkySdollS, proposed to open a sex robot brothel in Houston, Texas, where customers could rent humanlike, human-sized dolls. The Houston City Council responded by unanimously passing an ordinance that would forbid any business to allow its patrons to have "sex with a device resembling a human." Notably, the sale of the sex robots was not prohibited. The KinkySdollS robots cost around $3,000 each.

All these developments raise an intriguing question. In this ever-evolving area of technology, how will the law treat personal injuries or property damage, if any, arising from the use of AI? For example, there is major disagreement about any therapeutic use of sex robots; many also question whether general acceptance of sex dolls could lead to increased social isolation and encourage and reinforce illicit sexual practices. Can individuals who may be harmed as a consequence seek damages from the providers of the robots? Similarly, in the realm of healthcare, what if human technicians misinterpret or incorrectly administer drug therapy that was dictated by AI calculations? Under these scenarios, how, and from whom, could injured parties recover damages related to AI?

Most American jurisdictions impose liability on the party or parties who were a proximate or producing cause of the plaintiff's injury or property damages. When products are involved, some states impose strict liability on the product's seller whether or not the seller used ordinary care. Under those circumstances, liability is imposed if the damage or incident is caused by a defect in the product.

In Texas, for example, the legislature enacted a "Products Liability" chapter to the Texas Civil Practice and Remedies Code in 1993. Thus, in cases involving injuries arising from products, Texas juries can be asked to find whether a "design defect," a "manufacturing defect" and/or a "marketing defect" at the time the product left the possession of the "seller" was a "producing" cause of the injury. Texas law defines the "seller" as one who is in the "business of selling." "Business of selling" means "involvement, as part of its business, in selling, leasing, or otherwise placing in the course of commerce products similar to the product in question by transactions that are essentially commercial in character."

Liability can be based on a product that is "unreasonably dangerous" as designed, or as manufactured, due to a deviation from the construction or quality specifications, or as marketed, due to the failure to provide adequate warnings or instructions. Strict liability can also be imposed for factual misrepresentations as to the quality or performance of the product.

Texas law provides statutory indemnification to a non-manufacturer seller who did not alter, or participate in, the design of the product.

Additionally, in those jurisdictions where, like Texas, a proportionate responsibility or comparative causation structure exists, liability could be apportioned among the parties involved in the injury, including the injured party. Therefore, while in some jurisdictions the product will be subject to strict liability standards of culpability and the human plaintiff to a negligence standard that requires failure to exercise ordinary care, in a jurisdiction like Texas, the same standard would apply to the product, the human plaintiff, and anyone else – including non-parties – who may have caused or contributed to the injury. The jury would determine the percentage of responsibility attributable to each party or product that caused, or contributed to, the harm for which recovery of damages is sought.

Finally, Texas, like other jurisdictions, allows a defendant to designate responsible third parties to whose conduct the jury may assign responsibility for the injuries even if that party is unknown to or would not otherwise be liable to the plaintiff. However, "the term 'responsible third party' does not include a seller eligible for indemnity."

In such proportionate responsibility jurisdictions, how much responsibility could the manufacturer of an AI product like a sex robot face – for instance, if sued by a victim who had been assaulted by the purchaser of a sex robot? Is there something unique about an AI product which "behaves" almost like a human being – something that would lead a jury to assign some responsibility to the manufacturer or, conversely, to absolve the manufacturer and blame the product alone? Is it remotely conceivable that AI will develop to a point where the jury can be persuaded to blame the product, but not the manufacturer? Or are any of these scenarios so implausible that a jury will almost always assign 100 percent responsibility to the human tortfeasor?

In addition to physical injuries connected in some way to the product's use, could sex robots also provide the grounds for a "fault divorce" in those states that allow a spouse to blame the other for the divorce or enhance alimony or property division based on a spouse's infidelity? Unlike Internet pornography sites, the manufacturers of sex robots would appear to be easier targets – at least for process servers.

Similarly, could sex robot manufacturers face liability for other torts that disrupt existing relationships? Alienation of affection claims, still recognized in at least seven states, allow lawsuits against outsiders who interfere with the marriage. Notably, this tort does not require evidence of extramarital sex. Therefore, even though the sex robot is not a person, one spouse's excessive obsession with it could meet the elements of this cause of action.

No lawsuits involving AI have been found to date. It is foreseeable that traditional defenses – such as that there was no safer alternative design or that the product was "unavoidably unsafe" – may be available to potential defendants. This area of law remains unsettled – definitely one to monitor in the coming months and years.

Find out more about the implications of this area of law by contacting the author.

An earlier version of this article appeared on Law360 on October 24, 2018.