Proposed AI Legislation in the EU & Its Effect on B2B eCommerce

Growing concerns over the use of AI have prompted many countries to move toward basic standards of AI regulation. The proposed EU rules, which are expected to reshape the AI debate, aim to identify and manage the risks of using such systems. Who is affected, and how can companies navigate the regulatory compliance risks? Here’s a rundown of the proposed changes.

Remember “Nosedive”? It was an episode of Black Mirror that depicted a society where people rated their interactions with each other through a mobile app and, by doing so, cumulatively affected each other’s socioeconomic status. Although more or less comedic, even back in 2016 the story had eerie undertones that left little hope for privacy in the future. Now the scenario doesn’t seem too far-fetched, does it? Think of China's Social Credit System, the app Peeple, and similar technology that uses sensitive data to predict human behavior or rate people’s lives. We all have very good reasons to be skeptical of the Artificial Intelligence (AI) that lies at the heart of such technology.

Consider a recent investigation by Wired into the use of NarxCare, a drug monitoring and analytics tool for doctors, pharmacies, and hospitals that instantly identifies a patient’s risk of misusing opioids. NarxCare uses Machine Learning (ML) and Artificial Intelligence (AI) algorithms to mine state drug registries for red flags that might indicate suspicious behavior, such as ‘drug shopping,’ and automatically assigns each patient a unique, comprehensive Overdose Risk Score. The software looks at the number of pharmacies a patient visits, the distances they travel to get medication or receive healthcare, and the combination of prescriptions they use.

While the idea behind the tool seems brilliant – after all, the US government has spent years and millions of dollars trying to contain the opioid crisis and the number of prescribed controlled substances – the implementation is far from perfect. The problem is that NarxCare, which has wide-ranging access to patients’ sensitive data, uses a proprietary mechanism, meaning there’s no way to look under its hood and inspect its data for errors and biases that might (even unintentionally) slip into the AI’s logic. As a result, many patients, particularly the most vulnerable, have been mistakenly flagged for suspicious behavior and denied healthcare and medication that might have improved their quality of life.

NarxCare is just one example of an algorithmic engine that, despite benign intentions at its core, can produce erroneous results, thus weaponizing the biases of its underlying data. Even when AI seems inherently good, as when it’s used to spot carcinomas, human intervention is necessary to ensure equality and racial diversity in terms of representation, and accuracy in terms of output. It’s obvious that AI regulation has been a long time coming.

Overview of the Proposed EU AI Regulation

In April 2021, the EU released a new proposal aimed at the systematic regulation of AI. If enacted, it will forbid the use of AI systems that pose unacceptable risks, regardless of their benefits. At the most basic level, the proposed rules acknowledge that AI presents two types of problems: a threat to people’s physical safety and privacy, and a risk of discrimination. While completely banning some AI systems, the proposed regulation insists that companies formally document and demonstrate the fair and nondiscriminatory practices that lie at the core of their AI engines. The proposed rules, which are the first comprehensive response to the pressing need to oversee the development and use of AI, would apply to any AI system used or providing outputs in the EU, suggesting global influence.

Types of AI Systems Under the Proposed EU Regulation

The proposed regulation divides AI systems into three categories:

  • Unacceptable-risk AI systems, which in turn fall into three subcategories: harmful manipulative and exploitative systems; real-time remote biometric identification systems used by law enforcement; and social scoring systems that evaluate an individual’s trustworthiness;
  • High-risk AI systems, which include any system that evaluates consumers’ creditworthiness, assists in hiring or managing employees, or uses biometric identification;
  • Limited- and minimal-risk AI systems, which include many other AI applications currently used in a business setting, such as chatbots, recommendation systems, or AI-powered inventory management.

If an AI system doesn’t fall within any of the above categories but still uses EU data, it won’t be subject to the proposed legislation, but to the General Data Protection Regulation (GDPR) instead.

Although the classification system is still under development, it provides valuable insight into the legislation and a useful framework for companies looking to develop their own internal taxonomies for the development and use of AI. However, it’s worth noting that the proposed rules assess AI systems in terms of the risks they pose to the public, not to the organizations themselves, at least not at this time.
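
To make such an internal taxonomy concrete, here is a minimal sketch of how a company might encode these tiers. The category names mirror the proposal, but the example use-case mappings are illustrative assumptions, not a legal test:

```python
from enum import Enum

class AIRiskCategory(Enum):
    """Risk tiers from the proposed EU AI regulation."""
    UNACCEPTABLE = "unacceptable"              # banned outright (e.g., social scoring)
    HIGH = "high"                              # heaviest compliance obligations
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # mostly disclosure duties
    OUT_OF_SCOPE = "out_of_scope"              # GDPR still governs EU data

# Illustrative mappings only -- real classification needs legal counsel.
UNACCEPTABLE_USES = {"social_scoring", "manipulative_system", "realtime_biometric_le"}
HIGH_RISK_USES = {"creditworthiness", "employment_screening", "biometric_id"}

def classify(use_case: str) -> AIRiskCategory:
    """Map an internal use-case label to a risk tier."""
    if use_case in UNACCEPTABLE_USES:
        return AIRiskCategory.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return AIRiskCategory.HIGH
    return AIRiskCategory.LIMITED_OR_MINIMAL

print(classify("creditworthiness").value)  # "high"
```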

Requirements for Companies Using and Providing AI Systems

Depending on the level of risk, the EU proposes different levels of control. For example, systems in the unacceptable-risk category would no longer be permitted in the EU under the proposed legislation, whereas high-risk systems would be subject to the most stringent requirements.

The oversight obligations imposed on those developing or using high-risk AI systems would include:

  • Conformity assessments or algorithmic impact assessments that analyze system design, data sets, biases, system-user interactions, and monitoring of system outputs;
  • Assurance assessments that ensure systems are explainable, overseeable, and consistent; and
  • Cyber risk-management practices that take care of AI-specific risks like adversarial attacks.

Limited- and minimal-risk AI systems, by contrast, are subject to significantly fewer requirements. For these systems, companies mainly need to disclose any information necessary for users to make an informed decision about interacting with them.

Surprising Omissions: Shortcomings of the Proposed EU Regulation

Despite its apparent design for the public benefit, the proposed regulation comes with surprising omissions, primarily regarding Big Tech, which emerges virtually untouched under the new AI legislation. That is rather surprising, considering the increasing concern over Big Tech’s widespread use of and research into AI. Moreover, the regulation does not treat the AI used in social networks, search, online retailing, or mobile apps as high-risk, although some of these use tracking or recommendation engines that are covertly exploitative. It’s possible, however, that under the proposed regulation such engines could eventually be prohibited after assessment by a regulator.

The disclosure requirements have not been clear-cut so far either. For example, people have to be told when they interact with deepfake technology or when an AI system recognizes their emotions, race, or gender. In other instances, however, disclosures don’t seem to be required, for example, when AI algorithmically sorts people to determine their eligibility for public benefits, education, employment, or credit.

As of now, the oversight obligations, such as conformity assessments, look more like internal check-offs for companies to go through, with no audit reports for the public or regulators to review. Hopefully, these half-baked provisions will be revisited in upcoming sessions, and we’ll get some definitive recommendations regarding the use of the technology in question.

Territories Affected by the Upcoming AI Rules

Any AI system that provides output in the EU would be subject to the regulation, regardless of where the provider or user of the AI system is located. Likewise, individuals or companies located within the EU, selling AI on the EU market, or using AI within the EU would be subject to the regulation.

Penalties for Not Abiding by the Proposed Legislation

The penalties for not abiding by the rules could amount to €30 million or 6% of global revenue, whichever is higher, which is far heftier than the fines imposed for violations of the GDPR. The largest fines would be levied for the use of prohibited systems and for violations of the data-governance provisions for high-risk AI systems. All other violations would be subject to a lower maximum penalty of €20 million or 4% of global revenue, whereas providing regulatory bodies with incorrect information would carry a maximum penalty of €10 million or 2% of global revenue. As with other such rules, enforcement would likely be phased in, concentrating first on those who do not even attempt to comply.
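
To see what “whichever is higher” means in practice, here is a quick sketch of the arithmetic, using a hypothetical revenue figure:

```python
def max_fine(global_revenue_eur: float, flat_cap_eur: float, revenue_share: float) -> float:
    """Maximum fine: the higher of a flat cap or a share of global revenue."""
    return max(flat_cap_eur, global_revenue_eur * revenue_share)

revenue = 2_000_000_000  # hypothetical company with €2B global revenue

print(max_fine(revenue, 30_000_000, 0.06))  # 120000000.0 -- prohibited systems / data governance
print(max_fine(revenue, 20_000_000, 0.04))  # 80000000.0  -- all other violations
print(max_fine(revenue, 10_000_000, 0.02))  # 40000000.0  -- incorrect information
```

For a small firm, the flat caps dominate; for a large one, the revenue percentage does.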

Timeline for Adoption and Implementation of the EU AI Regulation

The timeline for adoption and implementation has not been announced, and there’s no way to know for sure how long it will take for the member states to agree on such legislation. For perspective, it took six years for the GDPR to go from proposal in 2012 to taking effect in 2018. Regardless of the timeframe, other laws already contain provisions aimed at regulating AI systems, even if they don’t specifically reference AI. For example, the GDPR already requires explicit consent from users before they are subject to decisions based solely on automated processing.
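
In practice, that GDPR rule often translates into a consent gate placed before any solely automated decision. A minimal sketch, where every name is hypothetical:

```python
# Record of purposes each account has explicitly consented to (illustrative).
consents: dict[str, set[str]] = {"acct-42": {"automated_decisions"}}

def has_explicit_consent(account_id: str, purpose: str) -> bool:
    return purpose in consents.get(account_id, set())

def decide_credit_limit(account_id: str) -> str:
    """Only apply a solely automated decision if explicit consent is on file."""
    if not has_explicit_consent(account_id, "automated_decisions"):
        return "routed to human review"   # no consent: a human decides
    return "automated decision applied"   # consent on file: model may decide

print(decide_credit_limit("acct-42"))  # automated decision applied
print(decide_credit_limit("acct-99"))  # routed to human review
```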

The Situation in Other Countries

The countries of the EU are not the only ones preoccupied with regulating AI systems. On the other side of the ocean, in the US, authorities are starting to respond as well. In April 2021, the United States Federal Trade Commission published a blog post elucidating its authority to pursue legal action against companies that fail to mitigate AI bias or engage in harmful practices associated with the use of AI systems. Massachusetts has recently passed a law limiting the use of facial recognition in criminal investigations, and other states are following suit. In California and Washington, for example, regulators are actively discussing how to regulate public contracts for the provision of AI-based products and services, an approach that has a lot in common with the EU proposal and the Canadian model of Algorithmic Impact Assessment. While it’s highly unlikely that the US will adopt the EU AI regulation verbatim, it can learn from its comprehensive approach and adapt it to US realities. Meanwhile, organizations such as the International Organization for Standardization (ISO) and the US National Institute of Standards and Technology (NIST) have already been publishing AI development and deployment standards and pushing for their international adoption.

Steps To Prepare for the Proposed Regulation

Although the regulation is far from final, companies need to take steps now to prepare to comply with the rules. The sooner organizations adapt to the requirements, the better. Below we discuss strategies companies can adopt to develop a comprehensive AI risk-management program.

First and foremost, companies should establish a holistic strategy that will address the regulatory requirements, clear reporting structures, and thorough data-privacy and cybersecurity risk-management protocols. 

The program should include provisions for the following procedures (a code sketch follows the list):

  • An inventory of all AI systems, with descriptions of both current and planned use cases;
  • A risk classification system that is in line with current and potential regulations;
  • Risk mitigation measures, including budgeting for investments in relevant technology that addresses AI-related risks;
  • Independent audits including data audits, data cleaning, and augmentation measures; and
  • Data risk management and evaluation procedures, including reviews of adverse incidents, potential misuses of the AI system, and organization of preventative measures.
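
Here is a minimal sketch of what one inventory entry might look like in code; the field names are illustrative assumptions that track the provisions above:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI system inventory (illustrative fields)."""
    name: str
    description: str
    current_use_cases: list[str]
    planned_use_cases: list[str]
    risk_tier: str                     # e.g., "high" or "limited_or_minimal"
    mitigation_budget_eur: float       # earmarked spend on AI-risk controls
    last_independent_audit: Optional[date] = None
    adverse_incidents: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="product-recommender",
        description="Ranks catalog items for returning B2B buyers",
        current_use_cases=["recommendations"],
        planned_use_cases=["personalized pricing"],
        risk_tier="limited_or_minimal",
        mitigation_budget_eur=50_000,
    ),
]

# Surface systems that have never had an independent audit.
print([r.name for r in inventory if r.last_independent_audit is None])
```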

Under the proposed regulations, organizations would be required to conduct conformity assessments for high-risk AI systems, which are essentially reviews of each such system’s compliance with applicable standards. These assessments should include the following procedures (sketched in code after the list):

  • Documentation of choices made while developing AI, including its limitations and level of accuracy;
  • Evaluation of risks, including those potential and unintended, regarding violations of fundamental rights; and
  • Review of risk mitigation measures, including human oversight.
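
One way to keep such assessments repeatable and auditable is to capture each review as a structured record. A hedged sketch, with hypothetical fields and data:

```python
from dataclasses import dataclass

@dataclass
class ConformityAssessment:
    """Documents one high-risk system review (illustrative structure)."""
    system_name: str
    design_choices: list[str]            # e.g., model family, training-data sources
    known_limitations: list[str]
    accuracy_summary: str                # measured accuracy and test conditions
    fundamental_rights_risks: list[str]  # potential and unintended harms
    mitigations: list[str]               # including human-oversight measures

    def gaps(self) -> list[str]:
        """Flag empty sections that would make the assessment incomplete."""
        issues = []
        if not self.known_limitations:
            issues.append("no documented limitations")
        if not self.fundamental_rights_risks:
            issues.append("no fundamental-rights risk analysis")
        if not self.mitigations:
            issues.append("no mitigation or human-oversight review")
        return issues

review = ConformityAssessment(
    system_name="credit-scoring-model",
    design_choices=["gradient-boosted trees", "three years of loan data"],
    known_limitations=[],
    accuracy_summary="87% accuracy on held-out 2020 applications",
    fundamental_rights_risks=["possible proxy discrimination via postcode"],
    mitigations=["human review of all declines"],
)
print(review.gaps())  # ['no documented limitations']
```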

It’s worth noting that regular assessments and extensive, well-maintained documentation ensure that developers use best practices while developing AI and consistently evaluate the risks associated with using such systems. 

A successful governance system for AI should consist of the following components:

  • A dedicated cross-functional committee, staffed from a variety of functions, that is responsible for ensuring compliance, and
  • Independent audits of AI systems.

The B2B eCommerce Platform and AI in B2B eCommerce

For B2B ecommerce, AI can be an indispensable tool that fuels everything from search engines to chatbots and recommendations. As with any other technology, AI has its challenges. Besides being expensive, AI systems are complex and require both the necessary data and the right people. Moreover, AI implementations demand a robust and innovative B2B ecommerce platform that is flexible enough to integrate with different third-party systems, including those that are AI-powered. Since B2B ecommerce platforms typically don’t come with pre-installed AI engines, companies need to rely on the platform’s ability to integrate; thus, it’s of utmost importance that business owners look for a platform with a reliable and scalable back-end API. The platform’s customizability is just as important: open-source solutions typically provide the flexibility to adapt to ever-changing business scenarios and might be better suited to AI integrations than proprietary solutions.
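
To illustrate why a reliable back-end API matters, here is a hedged sketch of a storefront calling an external AI recommendation service. The endpoint, parameters, and response shape are hypothetical, not any specific vendor’s or Virto Commerce’s actual API:

```python
import requests  # third-party HTTP client (pip install requests)

RECS_ENDPOINT = "https://ai-vendor.example.com/v1/recommendations"  # hypothetical service

def fetch_recommendations(account_id: str, cart_skus: list[str], timeout_s: float = 2.0) -> list[str]:
    """Ask an external AI service to rank products for a B2B buyer.

    Degrades gracefully: any failure returns an empty list so the storefront
    can fall back to a non-personalized default instead of erroring out.
    """
    try:
        resp = requests.post(
            RECS_ENDPOINT,
            json={"account_id": account_id, "cart": cart_skus, "limit": 10},
            timeout=timeout_s,
        )
        resp.raise_for_status()
        return [item["sku"] for item in resp.json().get("items", [])]
    except requests.RequestException:
        return []  # keep checkout usable if the AI service is down

print(fetch_recommendations("acct-42", ["SKU-1001", "SKU-2002"]))
```

The fallback branch is the design point: the AI engine stays a replaceable add-on behind the platform’s API rather than a hard dependency of the storefront.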

When it comes to compliance, B2B companies need to ascertain that the AI system they purchase complies with existing and upcoming regulations. One way to do so is to hire outside, independent experts to evaluate the AI systems. Needless to say, while using AI systems, companies have to keep all records that capture the requirements for those systems and demonstrate the organization’s full accountability and compliance.

Conclusion

The proposed EU regulation is yet another addition to an ambitious digital legislative agenda that Brussels has unveiled over the course of the last few years. The rules serve as a reminder for organizations of all sizes to ensure the fair and consistent use of AI technology. We strongly believe that those organizations that embrace the risk management of AI systems early on will inevitably win in the long run, while continuing to innovate and deploy AI safely and with speed.


Marina Conquest
Marina Vorontsova has been working in IT since 2007, for the past three years as a writer. She covers all things technology and contributes to business coverage.
Aug 25, 2021 • 5 min