Ethical Hacking News
The US government's efforts to implement remote identity verification (RiDV) technology have run into trouble after testing revealed significant bias, inconsistency, and unreliability in five RiDV products evaluated across various demographic groups. The findings have sparked concerns about inequitable treatment of certain groups on the US government's online platforms.
Only two of the five tested products demonstrated equitable performance across different demographics, while the other three exhibited significant disparities in error rates and false rejections, particularly for Black participants and individuals with darker skin tones. The GSA says it will continue to evaluate research on RiDV technologies to assess their effectiveness and inform future efforts. Vendor LexisNexis argues that the technology is not inherently flawed but that it relies too heavily on visual identification, and it criticizes NIST's IAL2 remote identity proofing standard as inadequate in the face of generative AI and deepfakes. The study also underscores the need for greater diversity and representation in the development of RiDV technology to ensure inclusivity and equity.
The United States government's efforts to implement remote identity verification (RiDV) technology, designed to enhance security and efficiency in online transactions, have hit a snag. A recent study conducted by the US General Services Administration (GSA) has revealed that five RiDV products, tested across various demographic groups, are plagued by bias, inconsistency, and unreliability.
The GSA's findings, which were recently made public, show that only two of the RiDV products tested demonstrated equitable performance across different demographics. However, the remaining three products exhibited significant disparities in error rates and false rejections, with one product displaying a disturbingly high rejection rate for Black participants and individuals with darker skin tones.
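To make the disparity concrete, the metric at issue is the false rejection rate: the share of genuine users a product wrongly turns away, broken out by demographic group. The short Python sketch below shows one way per-group rates and a worst-to-best disparity ratio could be computed; the data fields and sample numbers are illustrative assumptions, not the GSA study's actual schema or results.

# Minimal sketch of comparing per-group false rejection rates.
# Field names, groups, and figures are illustrative assumptions only.
from collections import defaultdict

def false_rejection_rates(attempts):
    """attempts: iterable of dicts with 'group', 'is_genuine', 'accepted' keys."""
    rejected = defaultdict(int)
    genuine = defaultdict(int)
    for a in attempts:
        if a["is_genuine"]:          # only genuine users can be falsely rejected
            genuine[a["group"]] += 1
            if not a["accepted"]:
                rejected[a["group"]] += 1
    return {g: rejected[g] / genuine[g] for g in genuine if genuine[g]}

def disparity_ratio(rates):
    """Ratio of the worst-performing group's rate to the best-performing group's."""
    worst, best = max(rates.values()), min(rates.values())
    return worst / best if best else float("inf")

# Example: three demographic groups with unequal outcomes (fabricated data)
sample = (
    [{"group": "A", "is_genuine": True, "accepted": True}] * 95
    + [{"group": "A", "is_genuine": True, "accepted": False}] * 5
    + [{"group": "B", "is_genuine": True, "accepted": True}] * 90
    + [{"group": "B", "is_genuine": True, "accepted": False}] * 10
    + [{"group": "C", "is_genuine": True, "accepted": True}] * 80
    + [{"group": "C", "is_genuine": True, "accepted": False}] * 20
)
rates = false_rejection_rates(sample)
print(rates)                   # {'A': 0.05, 'B': 0.1, 'C': 0.2}
print(disparity_ratio(rates))  # 4.0 -> group C is falsely rejected 4x as often as group A

A disparity ratio well above 1.0, as in the fabricated sample above, is the kind of gap between demographic groups that the GSA's report flags as inequitable.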
The study's results have sparked concerns about the potential for inequitable treatment of certain groups within the US government's online platforms. The GSA has acknowledged these concerns, stating that it will continue to evaluate research on the performance of RiDV technologies in order to assess their effectiveness and inform future efforts.
One of the vendors involved in the study, LexisNexis, has taken issue with the findings, arguing that the technology is not inherently flawed but that it relies too heavily on visual identification. LexisNexis CEO Haywood 'Woody' Talcove emphasized the need for a multi-layered approach to identity verification, incorporating data points such as machine usage patterns, email address validation, and cross-referencing of other records.
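As a rough illustration of what such a multi-layered check might look like, the Python sketch below combines a face-match score with device, email, and records signals into a single accept/reject decision. The signal names, weights, and threshold are hypothetical assumptions for illustration only and do not describe LexisNexis's or any other vendor's actual product.

# Hypothetical sketch of a multi-layered identity check that weighs several
# signals instead of relying on face matching alone. Weights and threshold
# are made up for illustration.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    face_match_score: float      # 0.0-1.0 from document/selfie comparison
    device_reputation: float     # 0.0-1.0 based on machine usage patterns
    email_validated: bool        # email address ownership/age checks passed
    records_matched: int         # count of independent records cross-referenced

def risk_decision(s: VerificationSignals, threshold: float = 0.7) -> bool:
    """Return True if the combined evidence clears the acceptance threshold."""
    score = (
        0.4 * s.face_match_score
        + 0.3 * s.device_reputation
        + 0.1 * (1.0 if s.email_validated else 0.0)
        + 0.2 * min(s.records_matched, 3) / 3  # cap the record-match contribution
    )
    return score >= threshold

# A borderline face match can still pass when the other layers agree...
print(risk_decision(VerificationSignals(0.55, 0.9, True, 3)))   # True
# ...while a strong face match alone is not enough without supporting signals.
print(risk_decision(VerificationSignals(0.95, 0.1, False, 0)))  # False

The design point is that no single signal, including the visual match, decides the outcome on its own; several independent layers have to agree before an identity is accepted.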
Talcove also criticized the National Institute of Standards and Technology's (NIST) IAL2 remote identity proofing standard, which has become a de facto benchmark for RiDV technologies. He argued that NIST has not gone far enough in addressing the limitations of this standard, particularly in light of the growing threat of generative AI and deep fakes.
The implications of the GSA's findings are significant, as they raise questions about the effectiveness of RiDV technology in protecting against identity theft and other forms of cybercrime. The study's results also highlight the need for greater diversity and representation in the development of these technologies, in order to ensure that they are inclusive and equitable.
In response to the study's release, LexisNexis has emphasized its commitment to improving the performance and reliability of RiDV technology. The company has expressed its willingness to collaborate with regulatory agencies and other stakeholders to address the concerns raised by the GSA's findings.
As the US government continues to navigate the complexities of remote identity verification, it is clear that a more comprehensive approach is needed: one that prioritizes the development of equitable and reliable technologies rather than leaning on outdated standards or depending too heavily on visual identification.
In conclusion, the GSA's study serves as a wake-up call for the US government and the tech industry to take a closer look at the limitations and potential biases of remote identity verification technology. It is time for a more nuanced approach, one that incorporates multiple data points and prioritizes inclusivity and equity.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2024/09/30/remote_identity_verification_biased/
https://www.msn.com/en-us/money/other/remote-id-verification-tech-is-often-biased-bungling-and-no-good-on-its-own/ar-AA1rt2cG
https://forums.theregister.com/forum/all/2024/09/30/remote_identity_verification_biased/
Published: Mon Sep 30 09:54:24 2024 by llama3.2 3B Q4_K_M