“I’d show pictures of the Chinese surveillance cameras and talk about their social credit system, and how the government is using technology to control its population. And they’re exporting it to other countries, and so there’s a real competition about what the future is going to look like between government control and not,” Thornberry said while speaking at an event held by the Special Competitive Studies Project, a nongovernmental organization funded by former Google CEO and AI investor Eric Schmidt that advocates for more U.S. AI spending.
A member of the Pentagon’s emerging tech advisory group, the Defense Innovation Board, Thornberry also has championed the use of AI and emerging tech to help the U.S. defend against China and preserve democratic values. By displaying those photos showing China’s AI-fueled surveillance apparatus, Thornberry aimed to illustrate exactly what the U.S. defense department is up against.
“You have to remind people the context, the bigger picture and why it matters,” Thornberry, also a member of SCSP’s board, said.
But as national security fears of China’s AI advancements propel U.S. AI policy, some human rights and AI watchdogs worry investments in AI with military applications will become a major focus, allowing the U.S. to deflect scrutiny or legal guardrails for its own AI practices.
“I’m far more worried about the risks to our society from failing to regulate AI than the risk that we fall behind China in some aspects of the technology,” said Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace.
Renard Bridgewater, a member of New Orleans’ Eye On Surveillance coalition who has advocated against surveillance tech including AI-based technologies there, questioned Thornberry’s use of China surveillance photos.
“It feels vaguely hypocritical, if we’re talking about China in one way, and using that as a motivation of sorts to spend more money on AI here, when metropolitan areas across the country — predominantly Black and brown communities — are negatively and directly impacted by that same technology or similar tech,” Bridgewater said during a Protocol event last week.
In July, the New Orleans City Council voted to reverse its facial recognition prohibition. Photo: Kate Kaye/Protocol
U.S. lawmakers, national security officials, and tech investors often point to China’s use of AI-based surveillance technologies to monitor and penalize minority Uyghurs as a key justification for blocking China’s access to tech that could advance its surveillance and military AI capabilities, as well as for increasing federal spending on unregulated AI in this country.
The so-called AI race is considered not only a competition with China for economic and technological superiority, but also one of democratic values. Miriam Vogel, co-chair of the White House National AI Advisory Committee, suggested at a POLITICO event in September that democratic values can be baked into U.S. tech like cinnamon and nutmeg in an apple pie.
“AI embeds our culture, and our culture in the U.S. is trust and democratic values,” Vogel said.
Vogel’s remarks mirrored sentiments found in one of the most influential documents guiding U.S. AI policy and investments thus far: the 2021 final report of the National Security Commission on Artificial Intelligence.
“The AI competition is also a values competition,” stated the report. In an effort to stay ahead of China and combat what the report called the “chilling precedent” created by China’s use of “AI as a tool of repression and surveillance,” the commission called on the federal government to double annual non-defense funding for AI research and development to $32 billion per year by 2026.
Today, people including Thornberry and others working at Schmidt’s SCSP have picked up the NSCAI’s mantle in the hopes of influencing federal spending on AI and emerging tech.
Still, the U.S. has yet to pass any federal regulations or laws governing AI development and use, despite an explosion of AI deployment by businesses and government. Letting China’s AI threat distract the U.S. from meaningful AI regulations would be a mistake, Sheehan said.
“We’ve already seen the way technology left to its own devices can widen inequality, deepen social divisions, and exacerbate political extremism. Unchecked AI deployment could put risks like those on steroids in a way that threatens the foundations of our democracy,” he said.
Surveillance in the USA
In September, when China’s Suzhou Keda Technology promoted its “smart community” project involving 2,000 facial recognition-enabled cameras installed in communities in Xinghua, a city about 150 miles north of Shanghai, the company said the system would identify people and vehicles to accurately warn of security risks and improve the level of safety for residents there.
It sounded familiar. When U.S. municipalities and everyday homeowners implement surveillance technology, protecting safety is often a primary reason.
“All I want is a safer city,” said New Orleans city council member Freddie King III in July when he voted for the heavily surveilled city to reverse a facial recognition prohibition, allowing use of the technology by the New Orleans Police Department.
Other cities in the U.S. including Detroit and San Francisco are home to growing publicly and privately owned surveillance camera networks that law enforcement can access. In small towns, AI-based license plate readers and vehicle recognition cameras with police access are being installed by private homeowners associations. There is little accountability or transparency when surveillance tech is deployed by private entities.
There’s also a buildup of AI-enabled surveillance tech in use by U.S. Customs and Border Protection at the southern U.S. border. Earlier this year the U.S. Government Accountability Office warned of the border protection agency’s failure to notify people of its use of facial recognition at U.S. airports.
“The way that the Uyghur people of China are continuously surveilled in such a highly oppressive way, that could readily happen here [in] a slow, creep-like fashion,” Bridgewater said.
In the U.S., Black people and women have been subjected to discriminatory AI systems used in hiring, banking, and health care. Some Black men have been wrongfully arrested because of inaccurate facial recognition in policing software.
Other controversial forms of AI that have sparked concern among civil and human rights advocates when deployed in China are also spreading in the U.S. Emotion AI, which is intended to infer people’s emotional states, has been baked into software sold and used throughout the U.S. by companies including Google and Microsoft. Emotion AI providers in the U.S. have attracted millions of dollars in venture capital funding.
But even though various U.S. agencies including the Department of Defense, intelligence agencies, and the White House Office of Science and Technology Policy have released nonbinding guidance on AI principles and rights, there are no federal AI regulations or laws in the U.S. And the country still has not enacted federal data privacy legislation despite indiscriminate harvesting and use of people’s data to build AI.
At the same time, China has established new data protections and AI-related regulations. The country enacted its Personal Information Protection Law in 2021, which some consider similar to Europe’s General Data Protection Regulation. That same year, China’s Supreme People’s Court ruled that businesses must obtain consent before using facial recognition. In January, China’s Cyberspace Administration became one of the first regulatory bodies to establish rules requiring algorithmic transparency and explainability, allowing people to opt out of algorithmic content targeting.
AI policy watchdogs recognized that China’s regulations serve a dual purpose, allowing the government to censor and shape public discourse. However, they said China’s regulations could have some positive influence on how other governments craft regulations and how corporations implement them.
“These regulations will cause private companies to experiment with transparency and explainability and impact assessments. China can help the global conversations around that because they’re moving from principle to practice,” said Merve Hickok, senior research director and chair of the board for the Center for AI and Digital Policy, a nonprofit AI policy and human rights watchdog.
Sheehan also saw value in China’s AI laws. “The irony here is that Chinese leaders get this,” he said. “They are putting out some of the most concrete regulations on algorithms anywhere in the world, and they’ve spent two years going after monopolies in their tech sector. We obviously shouldn’t try to mimic China’s controls on free speech, but we should recognize that strong regulation doesn’t need to be in opposition to innovation.”
Fighting regulations with AI values assumptions
Schmidt, who has the ear of several high-powered U.S. lawmakers and current and former government officials when it comes to AI policy, has vocally advocated against U.S. AI regulations.
In October, when the White House unveiled a nonbinding “Blueprint for an AI Bill of Rights,” he told The Wall Street Journal that the U.S. should not regulate AI yet because “there are too many things that early regulation may prevent from being discovered.” The stance echoes the common Silicon Valley motto “move fast and break things,” an approach Schmidt seems to openly espouse when it comes to AI advancement.
“Why don’t we wait until something bad happens and then we can figure out how to regulate it — otherwise, you’re going to slow everybody down. Trust me, China is not busy stopping things because of regulation. They’re starting new things,” he said during an interview last year.
At the same time, Schmidt and others suggest that AI built in China is ethically flawed. Earlier this year during a panel discussion at the Aspen Institute’s Security Forum, when Schmidt referenced Microsoft software that automatically writes programming code, he implied that it would be inherently nefarious had it been built in China: “Now imagine if all of that was being developed in China and not here. What would it mean?” he said.
Since then, Microsoft has been sued for copyright infringement in relation to that software.
“[A lot of people] have this notion that AI that’s developed in China somehow embeds a different system of ethics and values that’s uniquely Chinese,” said Rebecca Arcesati, an analyst at the Mercator Institute for China Studies.
“I fear that sometimes we may risk falling into this Orientalist trap, seeing China as this alien place where things are just different from what we are used to in the West,” Arcesati said.
There’s little indication that a technology’s country of origin automatically instills values — particularly AI technologies that are commonly constructed from borderless, open-source components. For instance, computer vision AI researchers from the U.S. and around the world have resisted requests to consider fairness or prevent discrimination in their work, despite the fact that some of it can be used to build controversial systems such as facial recognition and surveillance tech, deepfake videos, and AI that is meant to detect people’s emotions.
When chairs of one of the world’s most important computer vision AI conferences, held this year in New Orleans, tried to make minor ethics-related changes to research reviews, they were met with resistance from researchers, including some from the U.S. who told Protocol that requiring ethical reviews would hamper their independence and was “not their job.”
Abigail Coplin, an assistant professor of sociology and science, technology, and society at Vassar College who studies research and development in the AI-enhanced realms of biotech and agro-biotechnology in China, agreed. “There’s a very prevalent discourse right now, definitely in political circles, [about] whether values are intrinsically baked into technologies. I would say I’m a little bit skeptical of that,” Coplin said.
“It’s easy to criticize China or some of the other autocratic governments and shield the U.S. and other democratic countries from criticism,” Hickok said. “Some of it is legitimate criticism, but it’s very easy to use that approach to deflect any responsibility and accountability for the [things] that other countries are doing, and use this AI race framing for more funding into military or surveillance technologies which then find their way into experiments in domestic law enforcement or migration management,” she said.
Ultimately, by planning AI strategy and investment through a national security lens, the U.S. could drown out important efforts such as drug discovery, development of climate change-related technologies, and global AI standards that could benefit from collaboration with China, Arcesati said.
“At the time when this rhetoric of an AI arms race is really crowding out other conversations, global links with Chinese academia and Chinese researchers are fundamental and should be strengthened even further,” Arcesati said. “While countering and pushing back against China’s use of AI in ways incompatible with international human rights law and norms, democracies like the U.S. will also have to find ways not to shut the door on cooperation completely.”