Piecing Together the Global AI Patchwork – A Practical Guide to Understanding and Implementing New AI-Focused Laws in the EU, UK, and U.S.
Transcript
Hello, everyone. Welcome to our webinar on piecing together the global AI patchwork.
Delighted to be collaborating today with the team at Wiley and bringing you a practical guide to understanding and implementing a lot of the new AI-focused laws that we are seeing across the EU, UK, and US.
My name is Emily James. I'm a partner at Simmons and Simmons based in San Francisco. I head up our office here, not advising on US law, but providing in-time-zone support for companies who are expanding and operating in new markets around the world.
And I will hand over to each of the team to introduce themselves.
Hi, folks. I'm Manesh Tanner. I'm a partner at Simmons and Simmons. I also head the AI group at the firm. I'm based in London, and I've spent the best part of the last decade advising on the intersection of AI law and regulation.
Hi, everyone. I'm Kat Scott. I'm a partner at Wiley based here in Washington, DC.
I am part of our telecom, media, and technology group, and we help advise companies across all sectors on emerging technologies.
Hi, everyone. My name is Lauren Lerman. I am an associate at Wiley, also in the telecom, media, and technology group, and I work a lot with Kat Scott and our wonderful team on advising on these types of issues.
Fantastic.
And before I quickly run through the agenda for today, I wanted to mention a few housekeeping points.
So at the bottom of your screen, you will see a toolbar with some colorful buttons.
And clicking on these will make different panels on the screen appear and minimize.
All of the panels are resizable and movable, so if you wanna move them around, feel free. You can do that to get the most out of today's session.
You can also expand the slide area to full screen by clicking on the arrows in the top right corner.
If you have any questions during the webinar, you can submit them anytime via the Q&A panel. We'll have a look at those, and they won't be seen by other audience members. We'll be doing our best to pick up those questions, and we'll have some time at the end to do that as well. But if we can't get to them all, we'd be happy to follow up afterwards.
And finally, an on demand version of this webinar will be available about two hours after the session and can be accessed using the same website link that was sent to you earlier.
Okay. So what are we gonna cover today? We would like to start by providing an overview of the legal and regulatory developments. It's been a busy time in the field of AI, so we're gonna start by setting out some of those recent and important developments for you. But what we'd really like to focus on during this session is how, practically, companies can be thinking about navigating the AI regulatory patchwork. It's complex. It's evolving.
What practically should companies be doing?
And then finally, we'll end with some top trends to watch, things that we are seeing on the horizon, which hopefully will be useful as a takeaway as well.
So we've got a lot of content, so let's get into it. And I'll start by asking Manesh to talk us through what's happening in the EU.
Thanks, Emily. You've probably all heard by now of the EU AI Act. The EU loves to regulate tech. AI is no exception.
The EU is taking the regulation of AI very seriously.
The EU AI Act came into force on the first of August two thousand twenty four, but it has a staggered application.
The pyramid on this slide is the way to understand the act.
There are certain uses of AI that are prohibited entirely.
There is then a category of AI, similar to what we see in the Colorado Act, which we'll come on to later, designated as high risk AI.
There are then certain forms of AI technologies called general purpose AI models, and that is foundation models, LLMs, and so forth.
And then there are transparency obligations for a limited set of AI systems. It's important to note that not all AI is regulated under the act even if it falls in scope. The European Commission even said last week that they anticipate that over eighty percent of AI systems, whilst in scope of the act, won't actually be regulated under one of those top four categories.
That said, we expect this act to have a similar impact as the GDPR.
And part of the reason for that is it's extraterritorial.
So it can apply to US companies if, for example, they supply AI systems into the EU or, in slightly cryptic language, if the output from an AI system is used in the EU. We're awaiting guidance on exactly what that means. But for present purposes, this is going to have a global impact, and it has high fines for noncompliance, higher than the GDPR in some cases. The prohibitions on certain AI uses, which actually came into effect just two days ago, attract fines of up to seven percent of global annual turnover.
So this is a serious piece of regulation. As I say, the provisions on banned AI have come into force already. The next big milestone is August twenty twenty five for general purpose AI models, and then the big one, August twenty twenty six for high risk AI. And you'll see there we have noted the definition of AI systems. It is relatively narrow but can still capture a number of different AI technologies.
There's a distinction drawn between providers of AI systems and deployers of AI systems.
Broadly, that's developers versus users. Many of your organizations might fall into the users category. The good news is the regulatory burden is slightly lighter for users, deployers, than it is for providers, but there are still obligations across those four categories for both providers and deployers.
Lots more information on our website about the EU AI Act if you're interested. Important point at the bottom, general regulation like the GDPR can still apply to AI. We just have yet another regulation in the European Union targeting AI.
Emily, do you want to talk about the UK position?
Yeah. Absolutely. So I think the main thing to keep in mind here is that the UK is taking a contrasting approach in terms of its position. And we've seen that change, in fact, over the last year.
So last year, the UK government was very much looking at a sectoral approach.
No big, general regulation covering AI in the same way that the EU has done, and really relying on sector regulators to govern and implement issues around AI.
We've also seen various initiatives, and some bills have gone through parliament. There are some moving through right now which are identifying and looking at some quite specific issues.
But I think what's very interesting to see is that in the last few weeks, the UK prime minister, Keir Starmer, has announced an AI Opportunities Action Plan, which has a number of recommendations that the government has agreed to implement.
And it's really very much a pro-innovation approach, so less focus on an all-encompassing regulation and more about specific initiatives, in particular focusing on investment, infrastructure, looking at data that the government holds to help make government agencies more efficient, and the use of data and AI in that context. There's also a very specific consultation happening right now around generative AI and training models using data which is copyrighted, and we'll wait to see the outcome of that. There's a lot of debate, of course, around balancing the need to promote innovation with the need, obviously, to protect creators of content and so forth. So definitely a lot to watch this year in relation to the UK, and we'll see how the UK eventually balances its pro-innovation approach with its desire also to protect rights holders and also consumers.
So on to the US, where things have been equally, and extremely, busy.
Thank you, Emily.
So, yeah, next up, we wanna talk about the legal and regulatory landscape in the US for AI.
The US does not have a comprehensive federal AI governance law like the EU AI Act, but that does not mean that AI in the US is not regulated.
Quite to the contrary, AI is subject to a number of fragmented and overlapping laws, regulations, and other approaches at both the federal level and the state level, making the US landscape for AI quite similar to the landscape for privacy.
And that's complex and a patchwork.
So to help get our heads around the patchwork, we tend to think about US approaches as falling into one of two main categories.
So the first main bucket is generally applicable laws and regulations that apply to AI. And the second would be AI specific approaches. And in the US at both the federal and the state level, we have a smattering of both of those buckets.
So the generally applicable laws and approaches are exactly how they sound.
Right? They are technology agnostic, and they apply to AI just as they would to any other technology similar to what Manesh was saying in the in the EU as well.
In the US, the key generally applicable laws for AI stakeholders to be aware of include consumer protection laws, privacy laws, and anti-discrimination laws. For example, the FTC regularly enforces the FTC Act, which prohibits unfair and deceptive acts and practices.
That general consumer protection law applies to AI just as it would to any other technology. And in fact, the FTC, back in twenty twenty three, together with a set of other federal agencies, issued a statement to make that clear, saying that there's no AI exception for the laws that are currently on the books and that the FTC plans to vigorously enforce the law to combat unfair and deceptive practices or unfair methods of competition.
So certainly the generally applicable laws apply to AI and our federal agencies have been actively enforcing them.
With that, I'll pass it over to Lauren to talk about the AI specific approaches.
Thanks so much, Kat.
So in addition to the generally applicable laws that Kat has outlined, the US federal government has begun focusing more directly on AI in recent years.
One of the most notable and earliest AI approaches came out of the National Institute of Standards and Technology or NIST in two thousand twenty three.
NIST is a non-regulatory US federal agency, and among other things, they develop voluntary standards and guidance for emerging technological issues such as cybersecurity and artificial intelligence.
The NIST Artificial Intelligence Risk Management Framework, or AI RMF, as we'll be referring to it throughout the presentation, is a voluntary risk management resource for organizations that are designing, developing, deploying, or even just using AI systems.
The AI RMF aims to provide this voluntary guidance in risk management practices, for organizations.
But this is just one piece of the puzzle.
Much of the action taken across the federal government over the past few years has been initiated by executive orders issued by the president.
An executive order is a signed, written, and published directive from the president of the United States that manages the operations of the federal government.
In twenty twenty three, the Biden AI executive order was the cornerstone of the previous administration's approach to AI policy.
It focused on increasing the US role globally in AI, and it directed federal agencies to develop standards for internal use of AI. And it encouraged agencies to also use their existing authority to regulate across industries, or regulate the industry use of AI.
By the end of last year, hundreds of directives were completed by numerous agencies across the federal government, as required by the Biden AI executive order. And among these directives were two Office of Management and Budget memoranda that prescribed requirements and guidance on the responsible acquisition of AI in the federal government.
But when president Trump took office last month, he immediately revoked Biden's AI executive order and shortly after announced a new executive order.
This new executive order that he put out continues to focus on US competition, and it declares that US policy will, quote, sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security, end quote. This executive order also launches a review of the agency actions that were developed and issued under the twenty twenty three Biden AI executive order, and it will suspend, revise, or rescind any agency action that is inconsistent with that policy statement in the twenty twenty five Trump AI executive order.
So while the direction of AI in the US under executive orders has generally been previewed by this new administration, there's a lot yet to be determined pending the review that the Trump administration is currently undertaking.
Moving on to another piece of the federal puzzle, agency enforcement actions are also beginning to impact how AI is viewed and used.
Specifically, as Kat mentioned previously, the FTC has general authority to bring investigations and lawsuits against companies that engage in unfair and deceptive acts and practices. And under that authority, the FTC has looked at the use of AI and has brought several enforcement actions that allege that certain uses of AI can be deceptive or can be an instrumentality for unfair or deceptive acts and practices.
Some of these federal agencies are also proposing more industry- or use-case-specific AI rules and rulemakings. At the FCC, for example, the Federal Communications Commission, there were several rulemakings proposed in the last couple of years related to the use of AI in the texting and robocalling space and the use of AI in political ads. And we'll see if the current administration continues these and similar efforts.
Now on the state side, we've seen a tremendous uptick in legislative activity around AI. Some of the data shows that forty five of the fifty states considered AI legislation within the last year, and more than seven hundred AI related proposals were considered across state legislatures.
There are a wide variety of laws that states are considering and adopting, from comprehensive, like the Colorado one that we'll discuss momentarily, to use-case specific, to legislation that increases funding for AI. So there's a lot of different flavors of legislation being proposed. And here we provide some examples of the approaches that have been adopted so far to, again, preview the different layers and the fragmentation of the state AI landscape.
So as I mentioned, the Colorado AI Act, this law was passed in May of twenty twenty four, and it goes into effect February next year, twenty twenty six. This law is most like the approach that we're seeing in the EU, which we'll discuss in a little bit more detail later, kind of comparing the two frameworks.
But at a high level, the law establishes a range of detailed obligations for developers and deployers of high risk artificial intelligence systems. The law places a duty of reasonable care on developers and deployers of high risk AI systems to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from high risk AI systems.
Both developers and deployers can rely on a rebuttable presumption if they engage in certain practices, including but not limited to risk management policies and consumer notices.
This law also establishes certain disclosure requirements for consumers interacting with AI systems, even if they're not high risk AI systems.
So that's just one preview of the many laws that are being passed and considered across the US.
On this slide, we also briefly mention some more specific ones that deal with specific disclosure and transparency requirements, deepfakes, and the use of AI in employment.
And that's just kind of, again, giving this overview and flavor of all of the, different types of laws that we're seeing at the state level. With that, Kat, I'll pass it back to you.
Thank you, Lauren.
Okay.
So, like Emily said, one thing that we want to focus on for this webinar is really talking through how companies can navigate this complex framework. And so now that we've discussed the legal and regulatory landscapes at a high level across the EU, UK, and US, we do want to turn to that question of how companies can navigate this patchwork.
The earlier discussion you know illustrated well that the landscape is complex.
But one element of the patchwork that is helpful actually is that there is a consensus forming around what it means for AI to be trustworthy or responsible.
These principles, which we have laid out here on the slide, can be viewed as elements of what makes AI trustworthy. Or, on the flip side, they can be viewed as the underlying risks of AI. So, for example, trustworthy AI would avoid harmful bias, whereas algorithmic discrimination is a risk of AI that is not managed appropriately.
So it's important to note, as we're talking about these principles and there being consensus principles, that not every framework or approach uses the exact same wording or the exact same terminology.
But we are seeing common threads, and these represent those common threads that we see addressing the same concerns and driving towards similar frameworks and approaches across various models.
And importantly, tying it back to the discussion that we were just having, we see these principles showing up a lot and driving emerging laws and regulations. So they are an important starting point for organizations building an AI governance or compliance program.
So with that, I will, kick it back to Emily and Manesh to discuss some of these principles.
Thanks, Kat. I'll kick off with the first couple on the left-hand side. So starting with explainability, that's a principle that's similar to transparency, but it's slightly different. And I should say there's not universal consensus on exactly what these principles mean, but I think we are, as Kat said, reaching a landing. Explainability, I think, is less about transparency around the use of AI, or how the AI system is being used, and more about the ability to explain how an AI system arrives at a particular output or a decision, and it's particularly intended to mitigate black box effects within AI systems.
Now it's not straightforward. There's no consensus on what you need to say, how much you need to disclose, what level of detail you need to go into, or the style and approach of the content in terms of what you're explaining. I think regulators appreciate that you don't have to disclose proprietary elements of your AI system.
But we are increasingly seeing a move towards explainability requirements. There is already one in the GDPR, interestingly.
And now, in article eighty-six of the AI Act, there is an express obligation to explain decisions taken by AI in the context of high risk AI systems. So that is, as Kat rightly said, feeding into regulatory requirements.
Security and safety have sometimes been considered together. I think there's now an appreciation that they are slightly distinct.
Security is very much related, particularly at model level, to cybersecurity, robustness, the ability of models to withstand malicious attacks, so the ability to jailbreak a model, to bypass guardrails, and so on. Safety, I think, is more about the propensity of models to cause harm, and it's particularly relevant in a generative AI context where large language models could potentially produce harmful content like CSAM, or even lawful content which is still undesirable, so non-representative content, content which may be toxic. That also falls under the context of safety. Emily, do you want to pick up on privacy?
Yeah. Absolutely. Thanks, Manesh. I think we see this as a theme running through a lot of the guidance, the draft laws, and the incoming laws.
It's obviously a key aspect of trustworthy AI. You know, the AI that we're all using, and that's being trained on all of our data, obviously raises a lot of concerns for people and their privacy. And so what we see is a number of themes around ensuring that, when you're building out trustworthy AI or responsible AI, you do comply with existing law around privacy. And, obviously, GDPR is just one of those laws that is relevant. But it's also about building trust for users, and, again, this links back to the other points, being transparent about how data's collected and handled, which is ultimately gonna assist in any case in the adoption of AI.
And again, this links back to security issues and misuse of data. These systems are dealing with huge volumes of information.
And, of course, that enhances the risk and the threat landscape from a security perspective. So we see these themes somewhat overlapping with other topics on this slide, but privacy is a really important principle for all of these laws and guidance that are coming through.
I think the next principle that we wanna talk about is avoiding bias. So this is a key principle behind any development or deployment of AI, avoiding harmful bias. The risks of discrimination and bias that drive this principle are clear. With AI, these risks are complex, as there are various avenues through which bias can show up, from biased data sets and inputs to algorithmic bias. There are also concerns that, because of the nature of AI, the technology itself can supercharge and amplify bias. And so there is a lot of work being done to study bias in AI, including how it can be identified and how it can be avoided, mitigated, and managed.
This includes important work into terminology and studies into how to best mitigate bias.
The risk is one that drives a great deal of policy considerations too. So at the same time as that studying is going on, policymakers are looking at this risk and developing policy approaches around it. So bias, and the risk of bias, was a key driver of the Colorado AI Act. There, as Lauren noted, that law, also known as the anti-discrimination in AI law, obligates developers and deployers of high risk artificial intelligence systems to use reasonable care to avoid algorithmic discrimination in high risk AI systems. And we see that model, driven by similar concerns about the risk of discrimination and bias from AI, being considered across other states as well.
The next principle is accountability. So accountability really speaks to the question of who is accountable for responsible AI across the multiple participants and stakeholders in the life cycle of AI. Right? At a very general level, the principle of accountability really helps and tries to care for the risk that the responsibility for AI will be tossed around like a hot potato.
So several of the approaches and frameworks that we see from the legal and regulatory framework have taken the approach of tasking different stakeholders with different roles.
For example, in the Colorado AI Act, there are distinct and separate obligations for developers and distinct and separate obligations for deployers.
NIST's AI risk management framework also has some really helpful tools mapping out the various stakeholders in the life cycle of AI from developers and deployers to other stakeholders like end users, affected individuals and communities, standards organizations, civil society organizations, among others.
The last thing I want to note about accountability is that this is a really multi layered principle. It also speaks to the responsibility of AI and how that's allocated even within an organization, not just across the life cycle but within an organization.
And there are various accountability tools that are developing including using risk assessments and audits to assess AI accountability and trustworthiness.
And we've seen some legal frameworks in the US, at least, pop up around these accountability tools as well. So, for example, I'm sure folks have heard about the New York City local law, not even a state law but a local law, that requires bias audits for the use of certain AI systems in the employment context.
So we see this principle showing up in regulations and laws as well.
Lauren, do you wanna touch on transparency before we move on?
Absolutely.
So the last key principle and theme that we want to highlight is transparency, which is definitely a driving force for a lot of the legal and regulatory frameworks that we have discussed.
The principle of transparency is multifaceted. It can speak to the question of whether and when the use of AI should be disclosed.
Sometimes we see this show up in laws through notifying consumers that they are interacting with AI via chatbot or some other AI system.
And other times, it's disclosure requirements between developers of AI and deployers of AI.
Many of the state AI laws include some type of notice and disclosure rule. And, for example, there was a section that we had highlighted that lists different types of generative AI disclosure laws, which exist in both Utah and California.
This principle of transparency also exists throughout the AI life cycle.
And as I mentioned, there are certain transparency requirements under the Colorado law, where the AI developer has to disclose certain information about their AI system before a deployer uses that AI system.
So now that we've walked through the various responsible AI principles and how they're interconnected and threaded throughout this growing patchwork of legal and regulatory approaches to AI, the obvious question is how do organizations put these principles into practice? And one very helpful tool is the NIST AI RMF that we've discussed throughout today's presentation.
And the NIST AI RMF actually has its own seven principles for trustworthy AI that are very closely related to the themes that we just talked about.
So this voluntary framework is practical and useful for developing a governance and risk management program. And it's particularly useful because it's not prescriptive. Instead, it's flexible and outlines a process, and a variety of different entities can use this and modify it to fit their needs, rather than it being a compliance checklist or a one-size-fits-all solution, which is just not feasible given the broad and diverse AI ecosystem.
And along with this AI RMF, NIST released a playbook, which includes more specific recommendations and guidance for incorporating the AI RMF guidance.
And, Kat, I will turn it back over to you.
Great. Sorry. I was trying to find my mute button. Thank you. So another key question in addition, right, to how does an organization put these principles into practice is how does an organization comply with so many different legal and regulatory approaches?
And this is a persistent issue for companies navigating an increasingly complex legal and regulatory environment for any type of technology.
I'm sure several folks in the audience are all too familiar with navigating the privacy law patchwork.
For AI there is both good news and bad news. Right? So the bad news is that there's a new and growing patchwork to add to the mix.
But the good news is that companies really can leverage many of the same organizational tools and skills that are already established to develop an effective and efficient compliance plan that spans the various frameworks.
One key practice tip here, and it seems obvious, but it's a really foundational aspect of building a universal compliance program, is to build compliance strategies and programs around similarities across applicable legal frameworks. So after determining what legal frameworks and laws and regulations apply to your organization, do a mapping and decide what the similarities across those frameworks are.
As we've been discussing, several of the emerging legal and regulatory approaches draw from the same principles, so you can build your compliance program around those similarities between the frameworks and apply it across your organization.
Of course the nature of the patchwork means that there will also be outliers and a universal compliance strategy has to care for those outliers as well.
Looking very quickly at the EU AI Act and the Colorado AI Act side by side, I think it gives a good example of how this might work. There are a lot of similarities between the two laws, including that both group AI systems into categories by risk. So once an organization assesses where a particular AI system fits within the risk spectrum, in particular if it's high risk under the EU framework and the Colorado framework, then the organization will have to go about the business of finding overlapping similar requirements and outliers and caring for each.
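By way of illustration of that mapping exercise, here is a minimal sketch in Python of how a team might record obligations per framework and compute the overlap and the outliers for a system classified as high risk under both regimes. The framework names are real, but the obligation labels and the helper function are purely illustrative placeholders, not a statement of what either law actually requires.

```python
# Illustrative only: the obligation labels below are simplified placeholders,
# not a statement of what either law actually requires.
FRAMEWORK_OBLIGATIONS = {
    "EU AI Act (high risk)": {
        "risk management system", "technical documentation",
        "human oversight", "transparency to deployers",
    },
    "Colorado AI Act (high risk)": {
        "risk management system", "impact assessment",
        "consumer notice", "transparency to deployers",
    },
}

def map_requirements(frameworks):
    """Return the obligations shared across frameworks and the per-framework outliers."""
    obligation_sets = [FRAMEWORK_OBLIGATIONS[f] for f in frameworks]
    shared = set.intersection(*obligation_sets)
    outliers = {f: FRAMEWORK_OBLIGATIONS[f] - shared for f in frameworks}
    return shared, outliers

shared, outliers = map_requirements(
    ["EU AI Act (high risk)", "Colorado AI Act (high risk)"]
)
print("Build once, apply everywhere:", sorted(shared))
print("Handle separately per framework:", {f: sorted(o) for f, o in outliers.items()})
```

The point of the sketch is simply that the shared set becomes the core of a universal compliance program, while the outliers are tracked and handled framework by framework.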
So with that I think I'll pass it back to the Simmons team, for some more practical tips on navigating this patchwork.
Thanks, Kat. Yeah. Over the next two or three slides, we're just going to talk through some of the initiatives and strategies that we see organizations taking to deal with responsible AI, AI regulation, and particularly, across borders, how you deal with the fact that there are differing approaches across jurisdictions.
The first is a practical point about really needing to think about this across the supply chain. And what do we mean by that? Well, if you imagine you're the business in the middle, you are likely to have some third party element of AI. Most organizations don't develop AI from scratch.
That could come in the form of ready made solutions, AI components, models, for example, or cloud based solutions that you access. Whatever the form, you're going to want to think about the regulatory compliance of that AI. Has it been developed in accordance with applicable regulation? And that may not be the same regulation that applies in your jurisdiction to you.
Do you have all the information you need to make that determination? Because, essentially, you might be bringing a regulated item into your business, and in the same way as with anything else you bring in, that also carries risks.
Then you've got to think about the internal use of AI. There could be regulation attached to how you as a business use AI. I think a good example is the EU AI Act, which says that where you're using AI in an HR context, for example, that's high risk under the EU AI Act. And you need to consult with workers before you roll out, for example, an AI system to monitor performance.
Equally, service providers. If you've got third parties providing services to you based on AI, you're going to want to do due diligence, ask some questions, potentially think about your contracts to ensure that they're complying with their regulations when they provide those services. And then finally, if you provide AI embedded services or AI products to customers, you may not necessarily be under a regulatory obligation in that relationship, but your customers are likely to have AI regulation applied to them. And so you might want to think about how you can help them to comply with their obligations. And, again, that may not be in the same jurisdiction as you.
This slide is intended to talk about a concept which you've probably heard about. You may have these in your organizations, but establishing an AI inventory. And it's something we're increasingly seeing organizations do. This is, with apologies, a very crude example built in Excel, of how an inventory could look. But you'll see there, it's designed to capture key information about an AI system. And we've given the example of the prohibited practices under the EU AI Act as some information that you might want to capture in an inventory.
Now it can be challenging to build one of these. You've got to decide what's included and what's not. So you've got to work out what is an AI system and what isn't. And that could differ as between regulations.
That's a challenge that needs to be grappled with. Then there are some practical challenges about how much information you capture. You don't want this to be so detailed that it loses value. And, of course, it's only as good as the input that you get.
So you need to ensure that it's complete, that you have stakeholder involvement, and that everyone's bought into providing information to make sure that this is accurate and complete. So there are some challenges, but there are obviously some advantages to embarking on a challenge like this and establishing an AI inventory. It's obviously a helpful register, a central repository of key information about your AI systems. And what that then enables you to do is deal with the myriad of AI regulations that apply, because you can start to do what some of the concepts in the NIST AI RMF talk about: mapping and managing risk.
You can work out how regulations apply to your AI systems because, ideally, you'd have captured enough information about those AI systems to work out, right, this one's high risk under Colorado, not high risk under the EU AI Act. And then it can also help from a compliance perspective because you can capture the requirements. You can say, right, as a result of this high risk categorization, this AI system needs to comply with the following measures. And that can be helpful in terms of allocating responsibility internally and then also capturing that those steps have been completed to ensure regulatory compliance for those AI systems.
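As a rough sketch of the kind of structure an inventory like that might take, assuming a simple Python record per AI system, here is one illustrative shape. The field names, example systems, and risk labels are hypothetical; a real inventory would be tailored to the regulations that actually apply to your organization.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    # Hypothetical fields illustrating what an inventory row might capture.
    system_name: str
    business_owner: str
    role: str                      # e.g. "provider" or "deployer"
    use_case: str
    jurisdictions: list = field(default_factory=list)
    risk_classification: dict = field(default_factory=dict)  # framework -> category
    compliance_actions: list = field(default_factory=list)   # required steps and status

inventory = [
    AIInventoryEntry("CV screening tool", "HR", "deployer", "candidate shortlisting",
                     ["EU", "US-CO"],
                     {"EU AI Act": "high risk", "Colorado AI Act": "high risk"}),
    AIInventoryEntry("Internal chatbot", "IT", "deployer", "employee Q&A",
                     ["UK"], {"EU AI Act": "out of scope"}),
]

# Pull out everything flagged high risk under any framework, for prioritisation.
high_risk = [entry for entry in inventory
             if any(v == "high risk" for v in entry.risk_classification.values())]
for entry in high_risk:
    print(entry.system_name, "->", entry.risk_classification)
```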
Moving on, another challenge we often get from organizations, and apologies, this looks quite complicated as a diagram, but I'll talk through it very briefly. The challenge we get is how do you adequately risk assess new AI use cases or tools within your organization, but also avoid that becoming a bottleneck, for example, from legal teams? Because organizations want to innovate quickly. They want to adopt and roll out AI quickly.
So we often help in building out use case and AI tool risk assessment processes to enable organizations to ensure that they're capturing regulatory requirements, but also ensuring that they can innovate, quickly. So you'll see here we've got, basically, a three swim lane process where you have AI that's coming in from a third party. You've got AI being developed from scratch and then third party solutions that are wholesale, so potentially a SaaS type solution. And then procurement, development, and use stages, and importantly, the deployment stage as well, and different things that you can do at each sort of gateway to ensure that you're capturing risk and complying with regulation.
And you can start to filter by red flag, amber flag, and also green flag, because there are plenty of use cases in organizations that will fall into that green category. Again, I've tried to distill a lot of detail there in a few moments. But, hopefully, that gives you a sense of what a number of organizations are doing in terms of initiatives and strategies to innovate, but at the same time ensure that they're complying with the myriad of international AI regulations.
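To make the gateway idea a bit more concrete, here is a minimal sketch, assuming the gateway boils down to a short screening questionnaire mapped to a red, amber, or green flag. The question names and thresholds are hypothetical and would need to reflect your own regulatory analysis rather than anything prescribed by a particular law.

```python
def triage_use_case(answers):
    """Hypothetical screening: map questionnaire answers to a red/amber/green flag.

    `answers` is a dict of booleans collected at the procurement, development,
    or deployment gateway, e.g. whether the use case touches a prohibited
    practice or a high risk category under an applicable framework.
    """
    if answers.get("prohibited_practice"):
        return "red"      # stop: do not proceed, escalate to legal
    if answers.get("high_risk_category") or answers.get("personal_data_at_scale"):
        return "amber"    # proceed only after a full risk assessment
    return "green"        # lower risk: standard controls and periodic review

# Example: an internal meeting-notes summariser with no sensitive data involved.
print(triage_use_case({"prohibited_practice": False,
                       "high_risk_category": False,
                       "personal_data_at_scale": False}))  # -> "green"
```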
And then we come on to softer governance measures. And, Emily, I'll hand over to you to say a few words about this.
Thank you, Manesh.
We wanted to talk a bit about AI governance because, as I'm sure you've picked up, one of the themes of our discussion so far has just been the huge volume of new laws coming through, not to mention, of course, the fact that this is a very fast-moving area of technology.
So having a strong governance framework in place, we think, is incredibly important for an organization.
So I just wanted to talk you through some of the elements on this slide. Starting off with structures and processes, we think this is really important to help guide a company on the use of AI within the business, to help create policy, to monitor and implement policy, and to make sure that that's reviewed on an ongoing basis and that everybody's aware of that policy.
I think we've seen in recent weeks the news around a completely new AI system, DeepSeek, and, you know, having these new technologies suddenly coming on the scene would, I think, test companies and their approach, and so that's why it's important to have a really robust policy in place. And having a steering committee as well, or some kind of group of key stakeholders, to address all of the different elements. AI governance is not just an issue for the legal teams. I mean, that's often where it sits because it's very legally heavy, but it's a team sport. It needs the input and the involvement of people across an organization: IT, finance, security, obviously HR, marketing. So it's really important to build out a governance structure that includes all of those parts of the business.
The next really important issue is to think about ownership and accountability, to establish clear roles and responsibilities within an organization so that everybody knows who's looking at the development, deployment, and monitoring of AI systems, particularly using tools like the ones Manesh has just talked about to make sure the company is still on track.
Obviously, there's a big compliance component which should be part of any AI governance: assessing the applicability of new laws that are coming through, how those fit within existing structures, and what might need to be changed. A lot of companies have good frameworks and good processes in place, and so it's maybe not a case of reinventing the wheel, but merely making sure that any appropriate changes are made.
Risk assessment and management is also incredibly important. As I mentioned, this is an incredibly fast-moving area, and there needs to be an initial assessment of risk around new AI systems, whether they're being built internally or procured from external firms, but also continuous monitoring of not just the system and the risk, but whether additional data is being introduced to those systems, which might impact on the issues that the company is facing.
And lastly, I just wanted to mention the importance of looking at what's coming down the track as well. Obviously, there's a lot coming through from various countries around the world, not just the US, EU, and UK; recently, we've seen a lot of laws in different parts of the world. So we publish a newsletter every other week just to try and capture a lot of this. And so we definitely think that having an eye to what's coming down the track is incredibly important.
So we wanted to have a bit of time now, as we move into the last part of our webinar, to talk a little bit about some of the trends to watch, keeping with the theme of horizon scanning, and I wanted to start with a few questions for Lauren and Kat.
I think one of the things we've certainly been tracking quite closely is this recent back and forth with the AI executive order.
Could you put that into context a bit? What does that mean in terms of federal action regarding AI going forward? Could you help us to understand a bit more about that?
Absolutely. And I think everybody is working to assess what all of this means with the back and forth, because there has been a lot of activity in the past several weeks. Lauren mentioned it, but I'll do a quick recap again. On day one of the new Trump administration, President Trump revoked the Biden AI EO, and that EO had launched a great deal of work streams across the federal government. Then, a couple of days later, President Trump issued a new AI EO.
In addition to articulating the policy that Lauren shared with us earlier, one thing that it does is launch a review of all of the federal work streams that had been launched under the Biden administration.
And the point of this review is to determine what efforts are going to remain and what efforts are going to be on the chopping block.
So right now, we don't exactly know what the direction of federal AI policy will be from the executive, because that review is ongoing. But I think we can look to past Trump AI EOs (President Trump actually had two AI EOs in his first administration) and this latest policy statement to see sort of where policy might shift.
We do expect for some of the more top down and prescriptive approaches to AI to be part of the efforts that are halted. So for example the Biden administration had those OMB memos that Lauren mentioned. They had fairly onerous rules for rights impacting AI and safety impacting AI and those rules didn't just apply to federal agencies but they also trickled down to federal government contractors.
We expect there to be a shift in that approach for sure. But other work streams may not be as directly impacted or may proceed sort of in the normal course. For example, in his first administration the Trump administration tasked NIST with various efforts and activities related to promoting AI competition and promoting, AI best practices and standards. So, NIST could very well continue to play an important role in this space.
But the real answer is it's still TBD.
But then even outside of that, you know, I think it's important to remember that other federal agencies can act on their own authority, and they have been acting on their own authority on AI issues for some time. So we mentioned the FTC and FCC.
And so for those, again, we'll just have to continue to watch and see where priorities are shifting for sure, but activity will likely continue in some fashion.
And that activity that we're seeing, what does that mean for the states, which, you know, seem to have been quite active in the last year?
Yeah. It's a really great question. I mean, overall, the states have been incredibly active in AI regulation.
States are considering legislation, promulgating rules, and issuing enforcement guidance about how existing laws already apply to AI. So they're hitting this interest in and scrutiny over AI from a lot of different angles, and we really just expect those efforts to increase and not slow down with the shift in the administration at the federal level. I think, overall, the perception of a lack of definitive or unified federal action, so for example the fact that we don't have a comprehensive federal AI law, typically accelerates state activity, because, again, there's the perception that there's a gap that needs to be closed.
And I think one really interesting trend to see and to continue to watch is that we're seeing that increased action at the state level not really fall along party lines. It's not a partisan activity.
For example, Texas, which is traditionally thought of as a red state, is considering an AI bill called the Texas Responsible AI Governance Act that, if adopted, would define certain AI use cases that pose unacceptable risks and therefore would be prohibited.
And that approach really goes beyond what Colorado already enacted.
So again we're not seeing the approach to AI necessarily fall across party lines and so that's definitely something to continue to watch. The other big areas I'll just mention for watching are regulatory action at the state level.
So California currently is in the process of developing AI rules. They call it ADMT or automated decision making technology.
They're doing this under the authority from the privacy law but they're focused on AI.
And then Colorado is also undergoing, or going to start, a rulemaking process under its new law before that goes into effect in February of next year. So we'll look for those regulatory processes as well.
There's definitely a lot of things to keep track of over the coming year, that's for sure. I know that when we were preparing for this session, you had some really good questions about the enforcement of the EU AI Act, and I think we've had a question about that too. So perhaps we could talk to that issue.
Exactly. Yeah. I mean, I guess our question on that is right along those lines, regarding enforcement.
We'd really just like to hear more about the enforcement framework under the EU AI Act and how it compares to the enforcement framework under the GDPR.
Do you wanna pick that one up, Manesh? Because it's a very interesting topic.
Yeah. Absolutely. And we've got this fairly complex looking diagram, but I'm afraid it is reflective of the reality. Enforcement under the EU AI Act is going to be very complex, and it's different to the GDPR. So under the GDPR, it's enforced at member state level by data privacy authorities, and there's an important mechanism called one stop shop to ensure that you don't get multiple data protection authorities, investigating the same issue when cross border processing of personal data is involved.
Now with the AI Act, there are two differences. The first is we have both member state level enforcement and centralized European Commission level enforcement. So that's the left-hand side. Let's start with that. The EU AI Office is a body that's been created under the European Commission, and it will enforce the AI Act as regards general purpose AI models.
So that's that regime in the middle of the pyramid that you remember. Now when it comes to AI systems, and that could be the prohibited AI practices or the high risk AI systems, that's going to be regulated at member state level by so called market surveillance authorities, MSAs.
But here's the bad news. There's no one stop shop mechanism, so you could have multiple MSAs across member states investigating the same issue.
There could be multiple MSAs within the same jurisdiction.
And then on the right hand side, you'll see fundamental rights authorities. There are going to be other bodies within member states who could also investigate certain issues relating to AI systems.
So on the right-hand side, dealing with AI systems, you could be facing requests for information and investigations from multiple different authorities across different member states, with only limited coordination and consistency mechanisms. So this is really going to be quite chaotic.
And all of this could kick off as early as August twenty twenty five. There's still, remarkably, a bit of uncertainty about when exactly enforcement can start, but a lot of organizations are operating on the assumption that August twenty twenty five is the relevant date for prohibitions and GPAI models with the AI Office. For high risk AI systems, you have another year, August twenty twenty six. Emily, I don't know if you want to just say a few words about the interplay and enforcement of other regulations, because all of that's happening alongside this as well.
Yeah. Absolutely. I mean, we've seen no shortage of data protection authorities across Europe getting involved in this area. Obviously, as I mentioned before, AI invariably involves personal data, so that brings it right within the scope of GDPR. So we've seen the Italian regulator take action against OpenAI.
We've just in the last couple of weeks seen the CNIL in France publish its strategic plan, and GenAI is a big part of that.
And the European Data Protection Board has also issued opinions around using personal data in AI model development and deployment. So data privacy, and those authorities, are definitely worth keeping an eye on. And just more generally, other laws are relevant, for example, employment law, where there are risks of discrimination from bias that may be part of these models.
And we're also seeing other relevant laws crop up in relation to things like security and so forth. So it's certainly a busy landscape, and definitely worth keeping track of, working out, you know, and identifying what's going on.
I think that's probably a good point to wrap up with a few key takeaways. I think, you know, really one of the things we wanted to reinforce is: do look at these principles, see where there's consistency across all of these evolving laws, and use that to help inform and implement a governance and compliance framework within your organizations.
I think the other key message is around the EU AI Act. You know, don't wait.
We know that the timeline looks quite long, but it is a complicated piece of law, and so the sooner you can start to analyze whether it applies to your organization, the better. And I think the final point, as I've mentioned, is, you know, there is a lot going on in this space, and so it's really important to keep up to date on all of those developments.
So thank you again to the speakers.
Please do reach out if you have any questions in the coming days, and we thank you for your time and hope to see you on future webinars. Many thanks.
