
NYC aims to be first to rein in AI hiring tools

Job applicants rarely know when hidden AI tools are rejecting their resumes or analyzing their video interviews. But New Yorkers may soon have more of a say over the computers making behind-the-scenes decisions about their careers.

A bill passed by the City Council in early November would ban employers from using automated hiring tools unless an annual bias audit can show they won’t discriminate based on a candidate’s race or gender. It would also force the makers of these AI tools to disclose more about how their opaque systems work and give applicants the option to choose an alternative process – such as a human – to review their application.

Proponents liken it to another pioneering New York City rule that became a national standard-bearer earlier this century — one that required restaurant chains to post calorie counts on their menu items.

Instead of measuring the healthfulness of a burger, however, this measure aims to open a window into the complex algorithms that rank the skills and personalities of job applicants based on how they speak or what they write. More and more employers, from fast-food chains to Wall Street banks, are relying on these tools to speed up recruiting, hiring, and workplace assessments.

“I think this technology is incredibly positive, but it can cause a lot of harm if there’s not more transparency,” said Frida Polli, co-founder and CEO of New York-based startup Pymetrics, which uses AI to assess job skills through game-like online assessments. Her company has pushed for the legislation, which favors companies like Pymetrics that already publish fairness audits.

But some AI experts and digital rights activists worry it doesn’t go far enough to curb bias, and say it could set a weak precedent for federal regulators and lawmakers as they consider ways to rein in harmful AI applications that exacerbate inequalities in society.

“The bias check approach is good. The problem is New York City has adopted a very weak and vague standard for what it looks like,” said Alexandra Givens, president of the Center for Democracy & Technology. She said the audits could end up giving AI vendors a “fig leaf” for building risky products with the city’s imprimatur.

Givens said it’s also a problem that the proposal only seeks to protect against racial or gender bias, leaving out the harder-to-detect bias against disability or age. She said the bill was recently watered down so that it essentially just asks employers to meet existing requirements under U.S. civil rights laws prohibiting hiring practices that have a disparate impact based on race, ethnicity or gender. The legislation would impose fines of up to $1,500 per violation on employers or employment agencies, though it would be up to the vendors to conduct the audits and show employers that their tools meet the city’s requirements.
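For illustration only: one common rule of thumb behind such disparate-impact checks is the federal “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. A minimal, hypothetical Python sketch of that arithmetic (the group names and rates are invented, and the bill’s text does not prescribe this particular test):

    # Hypothetical selection rates: hires divided by applicants, per group.
    selection_rates = {"group_a": 0.30, "group_b": 0.21}

    # Compare each group's rate against the highest-rate group (four-fifths rule).
    highest = max(selection_rates.values())
    for group, rate in selection_rates.items():
        ratio = rate / highest
        flag = "possible disparate impact" if ratio < 0.8 else "within four-fifths threshold"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")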

The City Council voted 38 to 4 to pass the bill on Nov. 10, giving outgoing Mayor Bill de Blasio a month to sign it, veto it or let it go into law without a signature. De Blasio’s office says he supports the bill but hasn’t said whether he will sign it. If it becomes law, it would take effect in 2023 under Mayor-elect Eric Adams’ administration.

Julia Stoyanovich, an associate professor of computer science who directs New York University’s Center for Responsible AI, said the best parts of the proposal are its disclosure requirements to let people know they’re being evaluated by a computer and where their data goes.

“It will shed light on the features used by these tools,” she said.

But Stoyanovich said she’s also concerned about the effectiveness of bias audits of high-risk AI tools – a concept that’s also being scrutinized by the White House, federal agencies such as the Equal Employment Opportunity Commission, and lawmakers in Congress and the European Parliament.

“The onus of these audits is on the tool vendors to show that they comply with a rudimentary set of requirements that are very easy to meet,” she said.

The audits are unlikely to affect internal recruiting tools used by tech giants like Amazon. Several years ago, the company abandoned the use of a resume scanning tool after finding that it favored men for technical roles, in part because it compared job applicants with the company’s predominantly male tech workforce.

There has been little opposition to the bill from the AI vendors most commonly used by employers. One, HireVue, a video job interview platform, said in a statement this week that it welcomes legislation that “requires all vendors to meet the high standards that HireVue has stood for since the beginning.”

The Greater New York Chamber of Commerce said employers in the city shouldn’t view the new rules as a burden either.

“It’s about transparency and employers need to know that recruitment firms are using these algorithms and software, and employees need to be aware of that as well,” said Helana Natt, executive director of the chamber.