Beginning October 1, it will be illegal for a company in Maryland to use biometric facial recognition during the process of interviewing job candidates without the applicant's consent.
Executives might look at that legislation and wonder why it exists. Why would anyone take such a high-stakes legal risk in the first place?
Some vendors have promoted facial biometrics-based affect recognition as a way around bias in hiring, but critics contend that the technology is scientifically flawed and entrenches privilege.
Proponents of artificial intelligence already spend significant time and resources defending against credible accusations of algorithmic bias. Plugging AI into the hiring process, perennially fertile ground for courtroom finger-pointing, would seem to invite an unforced error.
Perhaps aimed at the uninitiated, Maryland’s act requires would-be hirers to get an applicant’s consent before even capturing their image. Specifically, House Bill 1202 says companies cannot create a facial template during an interview without that consent. Not mentioned are images a firm might record with its surveillance or security-badging cameras.
The legislation, briefly analyzed in The National Law Review, defines a facial template as “the machine-interpretable pattern of facial features that is extracted from one or more images of an individual by a facial recognition service.”
The legal risk of using algorithms found to be biased was discussed (subscription) last month in legal news service Law360. The piece argues that eradicating all bias in AI is unrealistic, but also unnecessary. Written by U.S. Army Brig. Gen. Patrick Huston and litigator-turned-business consultant Lourdes Fuentes-Slater, the article makes the case that executives have to recognize AI’s “propensity to have illegal or harmful impacts due to negative biases.”
Huston is assistant judge advocate general for military law and operations within the Department of Defense. Fuentes-Slater is founder and CEO of consultancy Karta Legal LLC.
Enacting a reasonable program to mitigate these biases will go a long way toward protecting early adopters, the pair wrote.