

EU AI Act: How to ensure compliance and mitigate risks


As B2B marketing leaders navigate the evolving landscape of artificial intelligence (AI) and its integration into marketing strategies, the importance of assessing risks and ensuring compliance with new regulations cannot be overstated. The EU AI Act, among other regulations, sets a framework that B2B marketers must understand and adapt to. 

I spoke with David Smith, AI Sector Specialist, and Paul Griffiths, Data Protection Officer, both from the DPO Centre, and Ethan Lewis, CTO of Kochava. Let’s explore how to evaluate risks, ensure compliance and implement AI systems responsibly, without undermining the power of B2B marketing.

Transparency is paramount

Understanding the new EU AI Act and its implications for B2B marketers is crucial. The introduction of the Act marks a significant development in the regulation of AI technologies. Nonetheless, David says it represents an evolution rather than a complete overhaul:

“We still need to adhere to the same fundamental principles we always have, such as transparency and having an appropriate legal basis for contacting people. What’s new is that for a subset of technologies within the industry, we must ensure they are used ethically and transparently. While there are certain aspects that may be considered riskier or even prohibited, the core concerns remain similar to what we’ve always dealt with.”

Some companies anticipated the transparency and ethical issues associated with AI and prepared for them in advance; Kochava is one of them. Ethan mentions that the company started building an AI maturity framework over a year ago:

“Within that framework, we recognized two main areas of application: the first for our customer base, involving the tools we provide, as outlined in the EU AI Act, and the second for internal use. We adopted a broad approach to ensure we acted responsibly from an implementation standpoint. This included addressing consent and ensuring transparency around AI usage, before the EU AI Act was actually published.”

Understanding the EU AI Act in the context of GDPR

Despite the challenges associated with the regulations, organizations can leverage their existing GDPR frameworks to align with the requirements. Ethan says it’s important to conduct Data Protection Impact Assessments (DPIAs) whenever new tools, technologies or profiling activities are introduced. The new legislation extends these principles by requiring specific assessments for AI systems, but the fundamental approach remains consistent: the main point is to assess and manage the risks associated with data use.

The EU AI Act introduces additional layers to the existing GDPR framework but doesn’t fundamentally change the process. Ethan suggests organizations must conduct detailed examinations of AI systems’ impacts, similar to the risk assessments already conducted under GDPR.

“I think pointing back to the GDPR and CCPA regulations is important, as they impose strict rules on how we can manipulate data. The main aspects of the EU AI Act categorize AI use into four specific categories. The one we focus on most is consumer personalization, especially in relation to ads based on user data. We need to determine whether this falls into a high-risk, low-risk or no-risk category.”

Whether a profile is generated by traditional methods or by an AI model, the key is to evaluate the impact on individuals and ensure compliance with data protection principles. This involves adding specific questions about these systems, such as what data is being inputted, how it’s being processed and stored, and what the potential impact of the AI system’s outputs is. 
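The assessment questions above lend themselves to a structured checklist. As a minimal sketch, the following Python snippet shows one way a review team might record an AI system assessment and triage it into a risk tier. The tier names loosely mirror the Act’s structure, but all class names, flags and triage rules here are illustrative assumptions, not language from the Act itself:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    """One record per AI tool or profiling activity under review (illustrative)."""
    name: str
    data_inputs: list        # what data is being inputted
    processing_notes: str    # how it is processed and stored
    output_impact: str       # potential impact of the system's outputs
    flags: set = field(default_factory=set)

    def risk_tier(self) -> str:
        # Hypothetical triage rules for illustration only; the real
        # classification criteria are defined in the Act's text.
        if "manipulative_targeting" in self.flags:
            return "prohibited"
        if "profiling_individuals" in self.flags:
            return "high"
        if "ai_generated_content" in self.flags:
            return "limited"
        return "minimal"

review = AISystemAssessment(
    name="ad-personalisation-model",
    data_inputs=["browsing history", "purchase events"],
    processing_notes="Scored nightly; profiles retained for 90 days.",
    output_impact="Determines which ads each user sees.",
    flags={"profiling_individuals"},
)
print(review.name, "->", review.risk_tier())  # → ad-personalisation-model -> high
```

The value of a record like this is less the code than the discipline: every new tool or profiling activity gets the same questions asked of it, and the answers are kept somewhere auditable.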

Conducting effective risk assessments for AI systems

Effective risk assessments require a thorough understanding of the system’s functioning and its potential impacts. According to David, organizations must be transparent about the data used to train the AI models, the processes involved in data ingestion and transformation, and the potential outcomes and risks associated with the AI-generated outputs. 

“We need to examine very carefully anything that could be perceived as exploitative or manipulative behavior. Such practices are not only considered high-risk but are actually prohibited under the Act. Determining which groups of individuals to target and constantly updating messages without sufficient human oversight could lead to targeting specific groups by exploiting their sensitivities and fears. This could result in unethical marketing practices.”

David adds that it can become quite easy for categories to emerge that are strongly aligned with particular religions or ethnicities, based on factors such as the times when people are online, their interest in specific products, or their purchases related to cultural celebrations: 

“Even if you claim not to process data about ethnicity, an AI system might inadvertently create categories or bias based on such sensitive information. This is precisely the kind of issue we need to be very vigilant about.”

How to mitigate risks

By conducting detailed risk assessments, organizations can identify and mitigate potential risks, ensuring that AI systems are used responsibly and ethically. David mentions an IBM quote from 1979, which stated that a computer can never be held accountable, and therefore must never make a management decision. The point is that it all comes down to responsibility and maintaining human oversight:

“The issue is that if we don’t carefully monitor and establish very narrow and tight guardrails, the system might act in ways that reflect poorly on the company, brand or individual. Therefore, it’s crucial to maintain close oversight of what any system is doing, both from an ethical and a commercial and reputational standpoint.” David Smith, AI Sector Specialist, DPO Centre

He adds that further details about the Act’s requirements and additional guidelines will likely be released, and professional bodies within the market will also create sector-specific guidance. It’s important to keep an eye on these developments over the coming months. Ethan says Kochava relies on its own in-house capabilities to ensure compliance in the long run:

“Our legal team does a fantastic job of staying up to date with any changes in regulations across the globe. This starts with training the executive team, ensuring they are aware of the evolving landscape and understanding how it impacts our employee base and product. We also rely on our AI maturity framework, which outlines essential processes such as risk assessments, exposure risk communication and go-to-market activities.”

Shift in UK policy backed by industry leaders

Privacy and transparency around AI are becoming more and more important, not only in the EU but across the world, including the UK. The first King’s Speech for the new Labour government indicated a shift in the regulatory approach. The new administration plans to implement AI regulations, a significant change from the previous administration’s stance of allowing industry self-regulation.

There is a significant push from industry bodies, such as the Data & Marketing Association (DMA), to provide guidance to their members and ensure safe and effective AI usage. Chris Combemale, CEO of the DMA, worked with the Government on the inception of data protection reforms:

“The DMA strongly supports the Digital Information and Smart Data Bill. We will work closely with the government to ensure the critical reforms to data protection legislation, that are important to our members, will become part of the new Bill. The DMA also supports proposals for an AI Bill that enshrines an ethical, principles-based approach to AI. The DMA will actively input on development of this Bill at all stages. The combination of a Digital Information and Smart Data Bill and an AI Bill will empower businesses to attract and retain customers, while knowing that they are doing so in a responsible and effective way that builds trust.”

It’s undeniable that AI has already transformed marketing. David mentions that AI-generated content and attempts to target consumers are widespread, especially among smaller organizations with limited budgets:

“It would be naive to suggest that people are not already testing machine learning algorithms to see if they outperform previous methods. I’m sure some of the best algorithms are already delivering superior results, and this trend will only continue. These advancements are becoming increasingly prevalent, regardless of whether people have fully considered their implications.”

Establishing transparent communication and consent mechanisms

Transparency remains a cornerstone of data protection under both GDPR and the EU AI Act. Paul says organizations must clearly communicate how they use data to train AI models:

“Transparency doesn’t change significantly from the GDPR side of things. It means being clear with people about what you are doing with their data and how it is being used. Under the EU AI Act, you must be transparent about how you use data to train AI models and about the data that has been ingested or pushed into an AI model. Transparency is about being open, honest and clear.”

This requires updating privacy notices and statements to reflect AI-specific data usage, ensuring that individuals are fully informed about how their data is being used. Paul recommends that consent mechanisms under the EU AI Act align with GDPR standards:

“Most organizations should already have privacy by design processes in place. These processes are essential when using a new tool, adopting new technology, combining data or creating new profiling activities. Any such activities should go through a data protection impact assessment process. The EU AI Act introduces additional requirements for using AI systems, but the fundamentals remain the same. Under GDPR, you must assess the data protection impact of any solution you use. Essentially, AI is just a new tool.”

Organizations must ensure that consent is freely given and explicitly communicated. Maintaining this standard of consent is essential for meeting both GDPR and EU AI Act requirements, ensuring that individuals’ data rights are respected and upheld.

Selecting compliant and ethical AI vendors

When selecting vendors, B2B marketing leaders must ensure that these vendors meet compliance and ethical standards required by the new regulations. Paul advises that organizations should demand detailed explanations from vendors about how their AI systems work, what data is used for training, and any potential risks associated with their use: 

“My argument in this situation is that even if you’re not the owner of the data, you are still responsible for it if you use it. You can’t outsource your compliance to someone else. For example, if you use a data vendor, you’ve essentially taken responsibility for that data. Even if the vendor collected and used it, once you bring it into your system, it’s your responsibility. Under GDPR, if you bring in data from a third party, you’re obliged to inform people how you’ve collected their information within one calendar month.” 

Data ownership implications

Paul adds that if an organization buys data, it owns it and is responsible for it, taking on the role of data controller. When taking data from a third-party vendor, the business needs to verify where the data was obtained, what people were told at the time and whether the data can be lawfully used for its intended purposes.

Ultimately, once the data is acquired, it is the organization’s responsibility to ensure compliance. Vendors should also be able to provide training and documentation to ensure transparency and accountability. 

Nonetheless, it’s important not to rely solely on vendors’ claims but conduct your own assessments and trials. By independently verifying the performance and compliance of AI systems, businesses can make informed decisions and ensure that they are using AI responsibly. Ethan recommends a proactive approach:

“AI is in an explosive phase of innovation, and while we don’t want to hinder that progress, the EU AI Act’s focus on consumer privacy and protecting the end user is crucial. At the end of the day, that’s the primary role of regulations: to safeguard users. My advice to marketers, given this context, is not to shy away from regulations. Embrace them, see them as positive feedback, and integrate them into your organization.”

Conclusion

As B2B marketing leaders face the evolving landscape of AI and its integration into marketing strategies, understanding and complying with the new regulations is paramount. They set a comprehensive framework to ensure the ethical use of AI technologies, requiring businesses to adapt their practices accordingly. By aligning their systems with the Act’s principles, organizations can mitigate risks and enhance their marketing efforts responsibly.

Leveraging existing GDPR frameworks can significantly aid in meeting the new requirements. Conducting thorough Data Protection Impact Assessments (DPIAs) for new AI tools and profiling activities is essential. This approach helps in managing data use risks and aligns AI system evaluations with established GDPR protocols, ensuring consistency and compliance.

Transparency and consent remain critical under both GDPR and the EU AI Act. Organizations must clearly communicate their data usage practices, especially regarding AI model training, and update privacy notices accordingly. Ensuring that consent mechanisms meet GDPR standards reinforces individuals’ data rights, fostering trust and accountability in AI applications.

Selecting ethical and compliant AI vendors is also crucial for B2B marketers. Organizations should demand detailed explanations of AI systems and independently verify their compliance and performance. By taking proactive steps to ensure transparency and accountability, businesses can responsibly harness AI’s potential while adhering to regulatory standards, ultimately safeguarding consumer privacy and building lasting trust.
