I agree with each of these points, which could guide us toward realistic boundaries that minimize the dark side of AI. Things like transparency about what goes into training large language models like the ones behind ChatGPT, and allowing opt-outs for those who don’t want their content included in what LLMs present to their users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from creating an artificial intelligence cartel that homogenizes (and monetizes) nearly all of the information we receive. And protections for your personal information as it is used by those all-knowing AI products.
But reading that list also highlights the difficulty of turning uplifting suggestions into actual binding law. When you take a closer look at the points in the White House blueprint, it becomes clear that they apply not only to AI but to almost everything in the tech sector. Each seems to address a user right that has been violated forever. Big tech didn’t wait for AI to develop unfair algorithms, opaque systems, abusive data practices, and a lack of opt-outs. Those are table stakes, and the fact that these issues are being raised in a discussion about a new technology only highlights the failure to protect citizens from the ill effects of our current technology.
During the Senate hearing where Altman spoke, senator after senator sang the same chorus: We failed when it came to regulating social media, so let’s not botch it with AI. But there is no statute of limitations on making laws to curb past abuses. The last time I looked, billions of people, including virtually everyone in the United States old enough to poke at a smartphone screen, were still on social media, being bullied, having their privacy violated, and facing horrors. Nothing stops Congress from getting tougher on those companies and, above all, passing a privacy law.
The fact that Congress has not done this casts serious doubt on the prospects of an AI bill. Unsurprisingly, some regulators, notably FTC chair Lina Khan, are not waiting for new laws. She claims that current law already gives her agency ample authority to address the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products introduce.
Meanwhile, the difficulty of actually putting new laws in place — and how much work remains to be done — was highlighted this week when the White House released an update on that AI Bill of Rights. It explains that the Biden administration is sweating mightily over a national AI strategy. But the “national priorities” in that strategy clearly have not been finalized.
Now, the White House wants tech companies and other AI stakeholders — along with the public — to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a way forward, the administration is asking corporations and the public for input. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical legal, research, policy, or scientific materials, or other content.” (I breathed a sigh of relief when I saw that comments from large language models weren’t solicited, though I’d be willing to bet that GPT-4 will be a major contributor despite that omission.)