Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens

“The framework allows for a set of binding requirements for federal agencies to establish safeguards for the use of AI so that we can reap the benefits and allow the public to rely on the services the federal government provides,” says Jason Miller, the OMB’s deputy director for management.

The draft memo highlights certain uses of AI where the technology can harm rights or safety, including health care, housing and law enforcement, all situations where algorithms have in the past resulted in discrimination or denial of services.

Examples of potential safety risks mentioned in the OMB draft include the automation of critical infrastructure such as dams, and autonomous vehicles like the Cruise robotaxis that were shut down last week in California and are under investigation by federal and state regulators after a pedestrian hit by another vehicle was dragged roughly 20 feet by one of the robotaxis. Examples in the draft memo of ways AI could violate citizens’ rights include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that can influence immigration or child custody decisions.

According to the OMB, federal agencies currently use more than 700 algorithms, although the inventories agencies have provided are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the coming weeks and months we will improve agencies’ ability to identify and report on their use cases,” he says.

Vice President Kamala Harris mentioned the OMB memo along with other responsible AI initiatives in remarks today at the US Embassy in London, during a trip made for the UK AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, such as the role AI may one day play in cyberattacks or the creation of biological weapons, AI is already amplifying bias and misinformation and affecting individuals and communities daily.

Merve Hickok, author of a forthcoming book on AI procurement policy and president of the nonprofit Center for AI and Digital Policy, welcomes the fact that the OMB memo would require agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure that AI isn’t included in every government program, says Hickok, who also teaches at the University of Michigan.

But she fears that exemptions could undermine those mechanisms. “I would be concerned if we started to see that exemption widely used by agencies, especially law enforcement, national security, and surveillance,” she says. “Once they get the exemption, it can be indefinite.”
