At a pivotal moment for new forms of artificial intelligence—and the internet itself—an overwhelming majority of CEOs appear to want a reassessment of how these technologies are governed. But who would lead that reassessment, and how, remains an open question.
That was the big takeaway from an online forum Tuesday hosted by the Yale School of Management’s Jeffrey Sonnenfeld, a longtime Chief Executive columnist, to commemorate the 50th anniversary of the birth of the internet.
In three striking poll results, the 200 large-company CEOs in the audience expressed extraordinary unanimity when it comes to re-evaluating the legal protections that underpin the success of “Big Tech” platforms like Google, Facebook and TikTok.
Some 85 percent of those surveyed said they strongly agreed or agreed with the statement “I am in favor of stronger government regulation of social media platforms.” A full 100 percent of those voting said they favored the Kids Online Safety Act sponsored by senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Perhaps most importantly, when asked about tech companies’ protections under Section 230 of the landmark 1996 Communications Decency Act—the legal provision that has provided a “safe harbor” from liability for what users post on social media platforms—some 96 percent of the CEOs said they thought the law was “obsolete” at this point and needed revisiting by Congress.
The event, a semi-annual gathering of large-company CEOs, authors, investors, academics, lawmakers and technology pioneers, focused much of its attention on the question of how best to regulate—and not regulate—the digital world as it enters a new phase of its existence.
But who should conduct that re-evaluation of how government patrols cyberspace, how it should proceed and how far it should reach all remain very open questions—as they have for much of the last 50 years—ones that increasingly hinge on data ownership and protection and the lightning-fast growth of generative artificial intelligence.
Anne Neuberger, deputy national security advisor for cyber and emerging technology, reminded the group that, at least when it comes to cybersecurity, the private sector needed to partner more with the federal government to combat foes in an era of rising threats, if only because the bulk of critical infrastructure in the U.S. is in private hands. She pointed to the example of a partnership with Google and Microsoft that provides free cybersecurity training for 1,800 hospitals throughout rural America. Efforts like these, she said, would need to expand in the age of artificial intelligence.
“We need to ensure that before AI is used across our critical water, our pipelines, our railways, we’ve built in things, protections like transparency on what data they’re trained on, adequate red teaming of the models, keeping a human in the loop on key decisions, ensuring that before operational systems are connected to AI models, we’ve tested it and we’ve ensured we’ve built in guardrails as well,” she said. “So in some ways cybersecurity is really sobering. And social media are sobering as we think about AI regulation and the need to put responsible regulation in place to ensure as a country we can benefit from the massive innovation AI will bring, but not wait to get the controls in place until we’re baking them on afterwards, which is costlier and harder.”
Tom Bossert, the former homeland security advisor to President Donald Trump, largely agreed, but pointed out that new and proposed cybersecurity regulations for companies have done little to halt “coordinated nation-state hacking” into U.S. companies. “I already see the compliance costs that are going to grow,” he said, “and I don’t see them translating into greater security results.”
The most skeptical voice in the morning’s session came from longtime Silicon Valley investor Roger McNamee, who has grown increasingly concerned about how the technology industry polices itself in the decades since he was among the first investors in Facebook. Over the past year, he has argued that the unregulated gold rush around generative AI is not only a terrible long-term bet for investors, unsupported by its underlying economics, but could pose dangers to society that potentially dwarf the unforeseen challenges created by the rise of social media.
“My big admonition to all CEOs who are on this call is to pause,” he said. “There is no rush to embrace artificial intelligence. In fact, one might reasonably conclude if you did the analysis that the technology is not actually ready for primetime and applying it into productivity use cases in corporations may in fact lead to perverse outcomes, similar to what we’ve seen with other internet technologies. So I would just encourage everybody to recognize not only is this battle not over, we’re just barely joining it.”