The US Department of Commerce has called for public comment to help guide policymakers in regulating AI tools such as chatbots, with the aim of enforcing accountability and transparency, The Guardian has reported.
Feedback is being sought from groups including researchers, industry bodies, and privacy and digital rights organisations on prospective best practices for audits and assessments.
During an announcement at the University of Pittsburgh, head of the National Telecommunications and Information Administration (NTIA) Alan Davidson said that regulators “have to move fast” towards implementing privacy rules, in response to rapid innovations.
Additionally, it was revealed that the NTIA is looking at ways to curtail bias, discrimination and other types of unsafe use of generative AI.
The US government had previously introduced a voluntary AI “bill of rights”, while the National Institute of Standards and Technology (NIST) provided developers with a voluntary risk management framework.
Washington is now looking to turn these guides into binding legislation, as releases from major players such as OpenAI and Google currently face no such rules — though the US has often lagged behind globally in drafting laws for emerging technology.
The EU, for example, has proposed risk categorisation in its 2021 Artificial Intelligence Act, but has received pushback from stakeholders such as Microsoft, which argues that chatbots cannot be assigned a single risk category because they are used for multiple purposes.
It is this uncertainty over how to properly govern AI development that, Davidson says, calls for intervention from outside experts.
He added: “Good guardrails implemented carefully can actually promote innovation. They let people know what good innovation looks like, they provide safe spaces to innovate while addressing the very real concerns that we have about harmful consequences.”
Trust concerns cited by BBC News chief
A lack of transparency around AI tools has also been called into question by the chief executive of BBC News, Deborah Turness, who has referred to the spreading of fake news as “frightening”.
According to The Times, Turness, speaking at a BBC event late last month, expressed concern about the impact of AI-driven disinformation on audience trust, and called for action to repair it.
To mitigate the sharing of false information, a physical “forensic journalism hub” is said to be planned, housing experts who will examine material for accuracy through fact-checking and image and video verification — a symbol of an “existential shift”, according to BBC News’s chief executive.
In line with this, Turness suggested allocating airtime to explaining how the BBC goes about its work, which she says “is a small price to pay for growing trust.
“The BBC is considered to be an institution and they are no longer trusted and audiences are moving away. We need to change that relationship.”
Audience trust concerns have also been raised by The Guardian’s head of editorial innovation, Chris Moran, who, after ChatGPT recently cited non-existent reports attributed to the publication, said: “The invention of sources is particularly troubling for trusted news organisations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy.”