The bill, called “An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT,” would put a series of protections in place, including requiring companies to disclose information about their algorithms to the Attorney General’s office, conduct regular risk assessments, and program the models to include distinctive watermarks to help detect plagiarism, according to the proposal.
The aim of the legislation is to help “protect the public’s safety, privacy, and intellectual property rights,” as newer and more powerful technologies start to emerge and become more easily accessible.
That Finegold used the program to write a bill that looks realistic is exactly the point he’s trying to get across.
“What we are trying to do is, we are trying to put up safeguards-slash-guardrails that can help this technology grow without having negative consequences,” said Finegold, who stressed that he is pro-technology.
“There are so many things this could be used for that could be used in a negative manner,” he added. “But used in the right context, it can be very powerful.”
The chatbot, which imitates human language and can create a smorgasbord of content in the blink of an eye, was publicly released in late November. It was an instant hit and has since stirred up conversations across the Internet about its many uses, with people arguing the pros and cons of its startling capabilities.
The technology is so impressive that Microsoft Corp. recently announced it’s making a “multiyear, multibillion-dollar investment” in OpenAI, the California-based research laboratory that created ChatGPT, according to the Wall Street Journal.
In December, the Globe gave it a test run by asking the chatbot to whip up a series of comical scenarios told in a traditional newspaper format, based on a series of Boston-related prompts. It didn’t disappoint.
Finegold worked with Justin Curtis, his chief of staff, on experimenting with the program to write the legislation, which he filed Friday.
So how well did the technology perform its political task?
“I thought it was pretty good. I thought it was OK,” said Finegold, chairman of the Legislature’s Joint Committee on Advanced Information Technology, the Internet, and Cybersecurity. “[ChatGPT] got us about 70 percent there.”
Curtis said it took a few tries to get the program to understand its assignment. At times, ChatGPT had difficulty cobbling together information and language that mirrored a bill written by a human being.
For instance, when Finegold and Curtis asked ChatGPT to draft a bill in the style of the Mass. General Laws, the chatbot apologized and said it was “not able to draft bills.” (Perhaps it sensed where this bill was headed.)
“It definitely required a little bit of nudging,” Curtis said. “One of the first attempts we did, it rejected the attempt altogether.”
Finegold’s office shared a series of screenshots from their attempts at getting ChatGPT to write the bill. The images show they had to supply very specific cues to get the program’s output over the finish line.
While they were impressed by the program’s speed and ability to add context of its own to the bill, Curtis said they ultimately needed to fine-tune the proposal themselves before the final product could pass muster on Beacon Hill.
“I think the human mind is still better,” Finegold said.
As part of the bill, ChatGPT was asked to include a line indicating that the proposal was drafted with its help. ChatGPT complied, but it couldn’t help but take what Curtis called a “little dig.”
“Any errors or inaccuracies in the bill should not be attributed to the language model, but rather to its human authors,” ChatGPT wrote, as if poking fun at Finegold and Curtis for using the program to regulate itself.
Finegold kept the line in the final version of the bill.
Steve Annear can be reached at firstname.lastname@example.org. Follow him on Twitter @steveannear.