Regulating AI, they did hurl.
But the lawmakers found,
Their knowledge was unsound,
As algorithms danced and twirled.
The beauty of that limerick is that it was written entirely by the artificial intelligence program ChatGPT, just now, after I asked it to write one about Congress’ attempts to regulate AI. The program took less than five seconds to spit out the composition.
I kind of like it — the way it captures the weaknesses of a representative democracy trying to police and regulate a technology that dances and twirls around us, growing stronger each minute.
View it just a little differently, however, and it reads as if it is sticking its artificial tongue out at us all, menacingly.
Some, like Eliezer Yudkowsky, a decision theorist, believe mankind’s only hope for survival is to shut it all down, and the sooner the better. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote for TIME magazine in March.
“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen,’” he wrote.
Meanwhile, in a separate but related matter, the state of Montana just passed a law banning app stores such as Apple’s and Google’s from making the popular social media app TikTok available for state residents to download.
Politicians from both major parties have argued that TikTok’s Chinese ownership could be a national security risk. Also, it has been linked to mental health problems in young people. Rep. Mike Gallagher, R-Wisconsin, called it “digital fentanyl” on a recent “Meet the Press” episode.
Social media and artificial intelligence both have been described as threats to health and freedom.
The truth is, humans are especially bad at identifying crossroads in technology. Few people noticed when the Wright brothers achieved powered flight, ushering in everything from high-speed travel to fresh Alaskan salmon in Utah. When the Russians launched the first satellite, Sputnik, in 1957, U.S. military officials scoffed that it meant nothing because satellites “could not be used to drop atomic or hydrogen bombs or anything else on the Earth...."
Even in 2007, a lot of folks missed the significance of the first iPhone. And less than a decade ago, experts were predicting entirely self-driving cars by 2020.
But that doesn’t mean we shouldn’t pay attention to what’s happening in the cyber world right now, and it doesn’t mean we should reject calls for sensible government regulations.
Emphasis on the word “sensible.”
The Wall Street Journal this week noted that some Republicans in Congress oppose regulating AI because it would create a new bureaucratic regulatory body enforcing rules that haven’t yet been written, thus stifling innovation. But with the technology capable of doing everything from fabricating convincing false videos to creating deadly biological weapons, it may be a good idea to set some rules.
The folks behind the technology believe this. Mira Murati, chief technology officer at OpenAI, told TIME the technology can be hijacked by bad actors.
“This is a unique moment in time where we do have agency in how it shapes society,” she said. “And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. … how (do) you make sure it’s aligned with human intention and ultimately in service of humanity?”
How, indeed. Freedom has always required a good dose of self-control and ethical conduct, as well as a large amount of patience with those who will take advantage of it.
John Milton, a 17th-century poet, author and inspiration for the Constitution’s First Amendment, said “we do injuriously, by licensing and prohibiting, to misdoubt” the strength of truth. “Let her and falsehood grapple; who ever knew truth put to the worse in a free and open encounter?”
That doesn’t mean we shouldn’t write laws to punish the misuse of powerful technology, or deliberate falsehoods.
Some experts believe Montana’s TikTok law will be found unconstitutional. And any regulatory agency setting rules for AI will have to tread delicately around free speech rights.
But that doesn’t absolve us from setting those rules, mitigating real risks to our existence, involving scientists who know more than lawmakers with unsound knowledge, and keeping a wary eye on this new technology as its algorithms dance and twirl around us.
Finally, I asked ChatGPT whether it thinks it ought to be regulated.
“The need for regulation of ChatGPT and similar AI models should be carefully considered,” it said, “taking into account potential risks, societal impact, and the balance between innovation and ethical concerns.”
Maybe it should stick to writing limericks.