EU AI Act still in infancy, but those with 'intelligent' HR apps better watch out

As the world's first legislation specifically targeting AI comes into law on Thursday, developers of the technology, those integrating it into their software products, and those deploying it are trying to figure out what it means and how they need to respond.

The stakes are high. McKinsey estimates that 70 percent of companies will deploy some form of AI by 2030, producing a global economic impact of around $13 trillion by that year and increasing global GDP by about 1.2 percent annually.

Some say that by introducing the AI Act as it stands, the European Union, the world's richest economic and political bloc, risks missing out on that bounty, while others say that the new rules do not go far enough in protecting citizens from the nefarious impact of AI.

Nonetheless, the one thing commentators do agree on is that, at this stage, we don't know much about how the legislation will be implemented, even if we know what it says and that it might result in fines of up to 7 percent of global revenue.

For example, the EU's European AI Office, which will play a key role in implementing the AI Act, is yet to be staffed. Guidance from the AI Board is yet to be published, and we are a long way from any legal precedent in case law, especially because the introduction of the law will be staggered according to the type of AI and the kind of application. The advisory forum the Act promises is yet to be established. Meanwhile, each member state within the EU is set to have its own AI authority to monitor implementation.

Nils Rauer, Pinsent Masons partner and joint lead of its global AI team, said: "This is all a work in progress to set up those authorities and make them familiar with the enforcement authorities that will be in place. It's very, very much a newborn."

The European Commission, the executive branch of the EU, first proposed the AI Act in 2021, but the introduction of ChatGPT seemed to sharpen the focus and urgency around its introduction (see our timeline box below).

The first provisions to take effect are the law's outright bans. From the beginning of February next year, prohibited activities will include biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race. The untargeted scraping of facial images from the internet or CCTV, emotion recognition in the workplace and educational institutions, and social scoring based on behavior or personal characteristics are also on the banned list.

General-purpose AI, which was shoehorned into the law at the last minute to ensure it covers generative AI models such as ChatGPT and others from OpenAI, will come under the law this time next year, meaning developers will need to evaluate models, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.

The next category that will need to comply is AI systems deemed high risk. From August 2026, systems with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law will need to comply. Examples might include uses of AI in critical infrastructure, education and vocational training, employment, and essential private and public services. Developers of business software, and those deploying it, might be most concerned about the appearance of employment in this category, which means HR software is likely to be caught.
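
To make the tiering concrete, here is a minimal sketch, assuming a company keeps a simple inventory of its AI use cases and tags each against the Act's categories. The tier names, dates, and use-case labels are our illustrative assumptions, not the Act's own taxonomy; real classification is a legal exercise, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"              # banned outright from February 2025
    GPAI = "general-purpose"               # model-level duties from August 2025
    HIGH = "high"                          # heavy obligations from August 2026
    MINIMAL = "minimal"                    # no specific obligations under the Act
    NEEDS_REVIEW = "needs legal review"    # default for anything unmapped

# Illustrative mapping only: a real assessment works from the Act's
# prohibited-practices list and Annex III, not from a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH,              # employment use case
    "promotion_recommendation": RiskTier.HIGH,  # employment use case
    "hr_faq_chatbot": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier; unknown use cases go to legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.NEEDS_REVIEW)

for case in ("cv_screening", "employee_mood_dashboard"):
    print(f"{case}: {classify(case).value}")
```

The point of defaulting unmapped cases to review rather than "minimal" is that, under the Act, misclassifying a high-risk system is the expensive mistake.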

Over the last year, tech industry vendors have launched a flurry of products promising to embed AI in their HR applications. Oracle, Workday, SAP, and ServiceNow are among the pack. SAP, for example, promises "intelligent HR self-service capabilities," while ServiceNow has introduced technology in which LLMs can produce summaries of HR case reports.

Rauer said the big tech companies began preparing for the Act a long time ago, but smaller and less tech-savvy market players are only just beginning. "They are now in a rush; it will be quite a challenge for them to become compliant within the next 24 months."

Rauer said complying with the law was not so much a technical burden as an administrative one.

"You need to document what you did in terms of [AI model] training. You need to document to some extent how the processing works and … for instance in an HR surrounding, on what basis the decision is taken by the AI to recommend candidate A instead of candidate B. That transparency obligation is new."

This created a new dilemma for technology providers who had previously tried to keep their methods a closely held secret. "You need a balance between what you document and make transparent for the authority and what is still trade secret. This is the main task that many of the IT providers do right now," Rauer said.

Meanwhile, users were faced with making strategic technology investments without fully understanding the work they would need to do to comply with the law.

One of the reasons for the continued uncertainty is that although the law is in place, guidance and case law are required to understand more fully how it might be applied, as was the case with the EU's General Data Protection Regulation (GDPR), Rauer said.

Jesper Schleimann, SAP's AI officer for EMEA, said the German software giant had followed the entire legislative process of the AI Act and put together a cross-functional team to analyze the new rules and identify where it might need to respond. "We have proactively established a comprehensive classification process which will safeguard the fulfilment of these requirements," he said.

Schleimann said for the SAP solutions deemed high risk, the company would work towards ensuring the necessary compliance is in place. "While the AI Act has been a more recent legislation, SAP has been executing in line with its AI ethics principles since 2018 and our external AI Ethics councils ensure ethical guidance and compliance.

"It is important to note that the interpretation of the AI Act has just begun. Secondary legislation from Brussels and the ongoing standardization process will shape its impact. SAP is in regular contact with the AI office in Brussels and also national authorities to ensure an exchange on compliance matters. We also work through key associations such as Digitaleurope, Bitkom, and BDI, which automatically means an exchange with a wider community."

Workday's Jens-Henrik Jeppesen, senior director of corporate affairs for EMEA and APJ, said the company was an advocate for responsible AI around the world and supported the intent of the AI Act.

"By enacting smart, risk-based, and proportionate regulation, we can establish trust and mitigate the risk of potential harm, while fostering innovation and bolstering business performance," he said. The company had established a Responsible AI (RAI) program led by the chief legal officer and including the chief compliance officer, chief technology officer, and chief diversity officer.

"Our Responsible AI framework is based on the EU AI Act requirements and the AI Risk Management Framework from the US National Institute of Standards and Technology (NIST). With our compliance programs, we are confident that should any of our products fall within the 'high-risk' category, we will meet the requirements of the regulation," Jeppesen said in a statement.

"Workday will continue to closely follow the development of secondary legislation, guidance and technical standards to ensure that our technology is developed and deployed in alignment with the requirements of the EU AI Act."

ServiceNow, meanwhile, declined the opportunity to comment.

Tanguy Van Overstraeten, partner for IT, data, digital, and cyber at global law firm Linklaters, pointed out that developers and businesses deploying software had different obligations under the law.

"In the HR context, for recruitment purposes, or for the promotion of people, for example, when you use AI technology, this will trigger a 'high risk' definition and there it is very important to understand which role you have. If you are a provider, you have the maximum obligations; if you are a deployer, there remain a number of obligations that can be quite burdensome, and so you have to be careful," he said.

However, companies that buy and modify AI technology could also be caught in the "provider" category and might, therefore, have more burdensome obligations than a deployer.

Overstraeten also warned that companies using general-purpose products in HR tasks might be caught in the high-risk category unintentionally. "This could be the case when employees are using tools such as ChatGPT for recruitment purposes, for example, without management's knowledge. How do you monitor that? How do you make sure that the people in the organization know that there is a limit and that you should be careful not to use these tools in the wrong manner? So, training is very important too," he said.

While providers and deployers will not have to comply until August 2026, they should not think they have plenty of time, he said.

"There's a lot to be done. There's a long list of obligations in the Act that are quite burdensome, they have to be on the register, which does not exist yet, but it will by the time the law is applied. I would say start now, and don't do it under pressure at the last minute."

The text of the AI Act was published in the Official Journal of the European Union on July 12. While the letter of the law is now decided, much of how it will be implemented remains unclear. But don't let that be an excuse for inaction.

Too far or not far enough: Will the AI Act strangle tech innovation in Europe, or fail to fully protect its citizens?

Since its inception, the EU's AI Act has provoked concern among the tech industry and its advocates who fear that it will stifle innovation in the sector in one of the world's richest economies.

Last year, Meta's chief AI scientist, Yann LeCun, said regulating foundation models was effectively regulating research and development. "There is absolutely no reason for it, except for highly speculative and improbable scenarios. Regulating products is fine. But [regulating] R&D is ridiculous."

Speaking to The Register, John Bates, CEO of document management company SER Group, said he did not think there was wide awareness among customers of how the AI Act might apply to the way they implement software.

"It's very ironic that you've got governments investing in research to try and make the EU strong in AI - and the use AI - and on the other side, the same organization is basically scuppering the potentially the use of AI without meaning to.

"The EU has good intentions, but customers of ours are getting two messages: You've got to have AI to be competitive, but if you do the wrong thing in AI, you could be fined, which effectively would mean the entirely senior management team would be fired, and the business may even go under. This is one of the worst things I've ever seen. [AI] is probably as important, if not more so, than the industrial revolution, than the internet."

He said that while the AI Act was "coming from a good place," it was put together "by people who don't really understand computer science and how anybody can predict the way something will be used."

Meanwhile, organizations concerned with defending civil liberties in Europe are arguing that the legislation does not go far enough in protecting citizens from the risks AI might introduce.

"Innovation is great when it's done in compliance with human rights, when it puts people first, [but] we actively don't want innovation at all costs. That's part of European values and European industry," said Ella Jakubowska, head of Policy, European Digital Rights (EDRi), an association of civil and human rights organizations from across Europe.

One of the main problems is that the Act is not a human rights law, but a risk-based law, she said. The result is a law that presumes there are pre-defined use cases in which AI can be a risk.

"Our perspective is that as these systems are used more and more in our daily lives, they can become risky, no matter really what that context is, and especially when we're talking about touch points with the state, whether that's policing, whether that's welfare and benefits, education, these can all be potentially risky areas."

Although the AI Act did not set the "gold standard" for international legislation that campaign groups had hoped for, there were some meaningful aspects of it that could be built on, and avenues to contest the parts where the Act did not go far enough, Jakubowska said.

She argued that GDPR had shown that a strong privacy tech industry, for example, can grow around legislation.

"Prior to the AI Act, we've seen pilots of AI technology in Europe that have been incredibly dystopian and manifestly not compatible with EU laws. It's right that we should say there is a type of innovation that's reckless, that we don't want, and then there's a type of innovation that we do want and that we can and should foster," Jakubowska said. ®

https://www.theregister.com//2024/07/31/eu_ai_act/
