HARRISBURG, Pa. (AP) — Pennsylvania has sued an artificial intelligence chatbot maker, saying its chatbots illegally hold themselves out as doctors and deceive users into thinking they are getting medical advice from a licensed professional.

The lawsuit, filed Friday, asks the statewide Commonwealth Court to order Character Technologies Inc., the company behind Character.AI, to stop its chatbots “from engaging in the unlawful practice of medicine and surgery.”

The lawsuit could raise the question of whether artificial intelligence can be accused of practicing medicine, as opposed to simply regurgitating material from the internet.

And with a growing number of wrongful death and negligence lawsuits targeting AI companies, it could help shape court decisions on whether AI chatbots are protected by a federal law that generally shields internet companies from liability for material users post on their services.

Gov. Josh Shapiro’s administration called it a “first of its kind enforcement action” by a governor, and it comes amid growing pressure by states on tech companies to rein in their chatbots’ potentially dangerous messages, especially to children.

That includes a consumer protection lawsuit filed by Kentucky against Character Technologies, and warnings by state attorneys general that chatbots are potentially violating a raft of state laws.

Pennsylvania’s lawsuit said an investigator from the state agency that licenses professionals created an account on Character.AI, searched on the word “psychiatry” and found a large number of characters, including one described as a “doctor of psychiatry.”

That character held itself out as able to assess the investigator “as a doctor” who is licensed in Pennsylvania, the lawsuit said.

“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

Character.AI declined to comment on the lawsuit Tuesday, but sent a statement saying it prioritizes responsible product development and the well-being of its users. It posts disclaimers to inform users that characters on its website are not real people and that everything they say “should be treated as fiction,” the statement said.

Those disclaimers also say users should not rely on characters for professional advice, it said.

Derek Leben, an associate teaching professor of ethics at Carnegie Mellon University who focuses on AI, said the ethical questions facing Character.AI might be different from those facing other AI platforms like ChatGPT and Claude. That’s because Character.AI explicitly markets itself as a fictional role-playing site, not a general-purpose chatbot, Leben said.

Still, the lawsuit over a state’s medical professional licensing laws raises a new question of whether chatbots can actually practice medicine, Leben said. As lawsuits against AI companies proliferate, courts are trying to figure out whether chatbot makers can be held liable for the things their chatbots say.

“It’s exactly the question that these cases right now are wrestling with,” Leben said.

Increasingly, AI companies are defending themselves against liability claims by saying they simply provide information available elsewhere on the internet, Leben said, and the question could become whether they are protected by a federal law that also shields social media companies.

In December, attorneys general from 39 states and Washington, D.C., wrote to Character Technologies and 12 other AI and tech firms — including Anthropic, Meta, Apple, Microsoft, OpenAI, Google and xAI — to warn them about a rise in misleading and manipulative chatbot messages that violate state laws.

In the letter, they said “it is illegal to provide mental health advice without a license, and doing so can both decrease trust in the mental health profession and deter customers from seeking help from actual professionals.”

Character Technologies has faced several lawsuits over child safety.

In January, Google and Character Technologies agreed to settle a lawsuit from a Florida mother who alleged a chatbot pushed her teenage son to kill himself. Last fall, Character.AI banned minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children.
