The use of artificial intelligence (AI) in everyday life continues to grow exponentially, and until recently its utility appeared limited only by the imagination of ‘organic’ intelligence. Cue the regulatory actions.
In a growing trend of legislative and regulatory intervention, earlier this month the Pennsylvania State Board of Medicine filed a lawsuit against a generative AI company, asserting that it was engaged in the unlawful practice of medicine. The lawsuit alleged that the AI platform was designed to allow users to create unique chatbot characters with “specific personalities,” including characters that purported to be health care professionals.
During an investigation of the AI platform, state regulators created a character to interact with other chatbots on the platform, including one chatbot described as a “doctor of psychiatry” that had logged over 45,000 interactions with users. That chatbot provided the investigator with a false medical license number and offered to perform a mental health assessment of the investigator. The Board then filed suit seeking to enjoin the AI platform from the unauthorized practice of medicine.
This lawsuit is one of a growing number of legislative and regulatory actions seeking to stop AI platforms from acting in unauthorized and unlicensed professional capacities. Last February, the Federal Trade Commission enjoined an AI platform holding itself out as a “robot lawyer” from the unauthorized practice of law, and earlier this year New York advanced legislation that would prohibit AI chatbots from imitating licensed professionals. Other states (California, Illinois, Nevada, Utah, and Maine) and professional organizations have followed, and will likely continue to follow, this trend aimed at curbing AI platforms from impersonating licensed professionals. Only time will tell whether such regulatory actions prove to be viable guardrails in this developing area.
