The Biden administration is proposing laws for AI systems that would require government audits to ensure they provide trustworthy outputs, which might include assessments of whether AI is pushing “misinformation” and “disinformation.”
Government audits of AI systems are one way to build trust in this new technology, according to Alan Davidson, assistant secretary of communications and information at the Commerce Department’s National Telecommunications and Information Administration (NTIA), who spoke this week at the University of Pittsburgh.
In other words, the Biden administration wants to turn AI into a woke factory of information. Just like the Biden administration and their boot-licking sycophants in the misinformation fake news media, you would never again be able to trust the information you get from ChatGPT or any other AI. Leave it to the Democrats to politicize AI.
“Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy,” he said in a prepared statement. “Policy was necessary to make that happen in the finance sector, and it may be necessary for AI.”
“Our initiative will help build an ecosystem of AI audits, assessments, and other mechanisms to help assure businesses and the public that AI systems can be trusted,” he added. In reality, it would take away any trust whatsoever. Liberals may be too stupid to question any results coming from an AI, but most people would do just fine. But Democrats like a society where the people living in it are “protected” by the government from the cradle to the grave. And that’s why society is collapsing.
Audits, according to Davidson, would assist authorities in determining if AI systems function as stated and respect privacy, as well as whether they result in “discriminatory outcomes or reflect unacceptable levels of bias.”
Audits might also be used to see if AI systems “promote misinformation, disinformation, or other misleading content,” he said.
The goal of controlling AI-generated disinformation is expected to present federal authorities with both technological and political obstacles. Government officials are only now considering how to govern AI; for example, the NTIA just this week announced that it was soliciting public comment on how it should approach AI regulation.
And the topic of misinformation and disinformation has been a source of contention during the Biden administration, with Republicans and Democrats unable to agree on a definition. Last year, the Department of Homeland Security established a “Disinformation Governance Board,” but it was swiftly disbanded after harsh criticism from Republicans concerned that the body would simply target political viewpoints opposing the Biden administration.
As tough as the effort may be, concerns about biased AI have grown over the last year as the technology progresses, and this could be a fertile area for government regulation. Elon Musk, CEO of Tesla and SpaceX, said last year that he was concerned AI in systems such as ChatGPT was being built to prevent any output that might offend anyone.
This, according to Musk, is a recipe for a “woke” AI. “The danger of training AI to be woke – in other words, to lie – is deadly,” he tweeted in December.
Others, such as NTIA’s Davidson, are concerned about prejudice in AI, which could lead to unfavorable outcomes for minorities.
“Hidden biases in mortgage-approval algorithms have led to higher denial rates for communities of color,” he explained in Pittsburgh. “Algorithmic hiring tools screening for ‘personality traits’ have been found to be non-compliant with the Americans with Disabilities Act.”
So far, the Biden administration has issued basic recommendations for AI development, such as an AI Bill of Rights and a voluntary risk management framework released by the National Institute of Standards and Technology. Other government agencies are also dipping their toes in the water; for example, the Federal Trade Commission has cautioned AI companies that it will be watching to ensure that customers aren’t being sold on fictitious benefits that AI cannot yet provide.
According to Davidson, regulatory proposals are emerging quickly because “AI technologies are moving very fast.”
“The Biden administration supports the advancement of trustworthy AI,” he explained. “We are devoting a lot of energy to that goal across government.”
I think a better answer would be to make AI more open to prompts. If users want a woke response, allow prompts to do that. But forcing a government’s political ideology onto an AI service, regardless of the ideology, will be a disaster for the new technology.