Ethics Meets Innovation: The Human Side of AI Governance & Privacy – Emerald De Leeuw-Goggin – S9E36

On today’s Legally Speaking Podcast, I’m delighted to be joined by Emerald De Leeuw-Goggin. Emerald is the Global Head of AI Governance and Privacy at Logitech. She is also the Co-Founder of Women in AI Governance, a professional network advancing women’s leadership in AI regulation, policy and governance. Emerald has been recognised as one of the top 100 European Female Founders to follow. She has been awarded European Young Innovator of the Year, Privacy Executive of the Year 2023 and Women in AI Ambassador in 2024.

 

So why should you be listening in? 

You can hear Rob and Emerald discussing:

– Values-Driven AI and Privacy Governance

– Cross-Functional, Collaborative AI Governance

– Global Compliance Through Unified Standards

– Women in AI Governance and Inclusive Leadership

– Balancing Innovation and Ethics Through Stable Principles

 

Connect with Emerald De Leeuw-Goggin here – https://ie.linkedin.com/in/emeralddeleeuw

 

Transcript

Emerald De Leeuw-Goggin  0:00  

You can have the best policies in the world, but if no one’s reading them or understanding them, you are in a bad place. Of course, it’s also going to be a busy time for data protection and privacy people, but it kind of starts with making sure that that whole architecture is sound. Because if these things start going out on their own and making their own decisions and doing all of that, we better make sure that is all really secure. On top of that, making sure that you have the appropriate guardrails before these things are implemented. These are our responsible AI principles, and we will build responsibly by adhering to them. The technology can move along, but these principles are stable. That’s a very critical mindset shift for people. It’s not that you’re going to throw stuff that you value out the window because you have a new piece of technology in your hand. It can’t be that way.

 

Robert Hanna  0:46  

On today’s Legally Speaking Podcast, I’m delighted to be joined by Emerald De Leeuw-Goggin. Emerald is the Global Head of AI Governance and Privacy at Logitech. She’s also the co-founder of Women in AI Governance, a professional network advancing women’s leadership in AI regulation, policy and governance. Emerald has been recognised as one of the top 100 European Female Founders to follow. She’s been awarded European Young Innovator of the Year, Privacy Executive of the Year 2023 and Women in AI Ambassador in 2024. So a very big, warm welcome to the show, Emerald.

 

Emerald De Leeuw-Goggin  1:19  

Thank you so much. I’m delighted to be here.

 

Robert Hanna  1:24  

Ah, it’s an absolute pleasure to finally have you on the Legally Speaking Podcast. Really excited for today’s discussion. But before we get into everything you’ve been getting up to, which is rather impressive, we have a customary icebreaker question here on the Legally Speaking Podcast, which is: on a scale of one to 10, with 10 being very real, what would you rate the hit TV series Suits in terms of its reality of the law? If you’ve seen it.

 

Emerald De Leeuw-Goggin  1:47  

I’ve definitely seen it. Of course I’ve seen it. And it’s a great icebreaker. But obviously it’s like a fan fiction, right? What is very realistic, though, is how many people pretend that they’ve read the whole contract. So I kind of have to give it a two for that reason. And I think it has also taught us all that the standards for dressing are high, and it’s given us great skills when it comes to, I guess, slamming a file shut and leaving the room with drama. So we could arguably give it one more point for that. So, between a two and a three.

 

Robert Hanna  2:20  

A very specific and justified 2.5! And with that, we’ll move swiftly on to talk all about you and your impressive journey. So, Emerald, would you start by telling our listeners a bit about your background and career journey?

 

Emerald De Leeuw-Goggin  2:32  

Absolutely. So I’m half Irish and half Dutch, which kind of explains the Emerald De Leeuw part, though I am married now, so I’ve hyphenated Goggin on, but it often looks quite long, and obviously people have known me by this name for quite some time now. So I grew up in the Netherlands, did my legal education there, and moved to Ireland in 2012 to do an LLM. I specialised that year in the GDPR, which, if you know the timeline for the GDPR, only became fully enforceable from May 2018, so 2012 was really quite early. And maybe out of a sheer sense of always wanting to have some autonomy, and having nightmarish ideas about having a boss, which in reality really isn’t that bad, trust me, I decided that I needed my own privacy technology company. The rationale was very much that I had spent so much time with this particular regulation that I was kind of going: okay, this is really complicated, it is going to have global applicability, and who has time for this? How do I scale myself? I also had a fair dose of imposter syndrome, because at the time I was reading fairly technical papers about things like data fragmentation in the cloud, because this was when cloud computing was becoming a thing, which just kind of makes me feel really old. But basically I did another master’s, in Business Information Systems, and I started my own company, which put me on a fairly interesting trajectory of highs and lows, but also gave me a lot of skills that are very useful to me today, because I’m currently Chief Privacy and AI Governance Officer at Logitech. Now, I always say I like to have many spinning plates. So in addition to doing that, I’m also co-founder of an organisation called Women in AI Governance, and for the past two-ish months I’ve also been a blogger, focusing on executive style and dressing.
So I do a lot of different things, but I’m interested in a lot of different things, and it’s good to be busy.

 

Robert Hanna  4:42  

And that’s where we bounce off each other, because, like you, I like to have various different interests, and it’s incredible what you’re doing. So let’s unpack some of that. Let’s start with Logitech, because you obviously oversee the AI governance and privacy there. What does your role entail? Tell us a bit more.

 

Emerald De Leeuw-Goggin  4:57  

So Logitech is a company that’s active in 100-plus markets, and many, many countries have laws that regulate data protection, privacy and AI. So my team and I are responsible for ensuring compliance across all of those regions, but I’d like to say that the approach is not just law driven. We’ve taken a values-based approach to our innovation, and we’ve published responsible AI principles to kind of say: look, even if there’s no law, we will adhere to the principles of transparency, fairness, privacy, security and accountability. And the reason we do that is because it’s the right thing to do. The risks and potential harms of AI don’t go away if there’s no law, so making sure you have specific controls in place that do their very best to mitigate any of those harms is obviously really important. On top of that, AI governance is a huge area now, right? And of course responsible AI is part of that, but there’s also just hardcore legal compliance with things like the European AI Act, that being the most famous law and the most prominent one at the moment. So ensuring that products, services and internal efforts comply with those laws is a critical part of what we do.

 

Robert Hanna  6:15  

And a lot of moving parts as well, particularly in the world we’re operating in. With that, then, because it’s a really interesting role that you have, and we try to get at that career side of things: what does a typical day look like for you, if there is one?

 

Emerald De Leeuw-Goggin  6:29  

Well, obviously, because I’m the head of the department now, a lot of what I do is strategic. So deciding: okay, what is it that we’re doing? What is important? What should be prioritised? How are we going about it? But then, going a level deeper, I’m also still quite hands-on. I don’t want to say, oh, I just delegate everything. Also, just delegating everything gets really boring and you lose your edge. So I would encourage any listeners to make sure you still get stuck in, because at the end of the day this is moving super fast, and that’s always a challenge, particularly if you’re in technology law. It’s moving at such a pace, it’s very hard to keep up with everything. So make sure you’re actually still doing real work. And what does the real work actually look like? It could be anything from ensuring your policies are actually aligned with the current state of technology. I actually did a LinkedIn post about this that turned out to be very polarising, which, in a way, is a good thing, right? It was about the concept of policy velocity: ensuring that you actually update policy more frequently now, basically in terms of what is allowed and what’s not allowed, and your opinion on these technologies. So that’s a big part of it: making sure that we’re responding to the market, and making sure we’re responding to the business and what the business wants to be doing. I’m always a big believer in running at the problem. Sticking your head in the sand and running away from your problems is a very poor strategy; your problems only get bigger. So it’s much better to look at governance and compliance as a business enabler, giving people clarity on what’s right and what isn’t and where they need to reach out, than being weirdly vague, or ignoring it, or being overly restrictive, because banning also doesn’t work; then you get a shadow IT or a shadow AI problem.
So: making sure that you’re pragmatic and business focused. A huge part of it is training, because you can have the best policies in the world, but if no one’s reading them or understanding them, you are in a bad place. There’s obviously a contractual element to this, because everyone is developing AI terms, so you’re ensuring your third parties are actually aligned with what you need them to do. If you’re the third party, you’re making sure that you’re adhering to what your customers need you to be doing. And then there are AI risk assessments, privacy impact assessments, all of the things that come with a robust privacy programme, and there’s responding to new legislation and tracking and monitoring that. So there’s a whole bunch of stuff. Maybe what’s helpful is if I break down how our team is structured, because that might give you a bit of a feel for what this actually looks like in practice. So, I lead the team. There’s a privacy operations team that looks after data protection impact assessments, specific privacy-related training and data subject rights requests. So things like: I want to delete my data, I want a copy of my data. All of that sits in privacy ops. Then we have what we call privacy legal, for lack of a better word. We’re all part of the legal team, but that’s very much the actual counsel: the lawyers who are responsible for contracts, legal advisory, policy writing, all of that. That’s not to say these people don’t do training; they absolutely do. Then there’s programme management, because obviously projects and programmes need to be managed across the board.
And then there’s the AI governance team, which, of course, is growing and evolving at the moment, because it’s a new area. That really consists of the people responsible for AI Act compliance and compliance with other AI laws, and for ensuring that our products and services comply with our own responsible AI principles. That has been a structure that works really well. AI governance ops obviously sits in that team, and maybe in the future that will evolve the way privacy did. But I’m a big believer in responding to business reality, and so far we’re in a good place.

 

Robert Hanna  10:38  

Yeah, and thank you again for being so thorough on that. And I love the point around running at the problem. We’ve had Penny Mallory come on the show, one of the UK’s first female championship rally drivers, who used to race around the world with Colin McRae and various others. And she says: think about what is great about this problem, and approach it from that perspective, rather than, like you say, running away from it and hoping things might get better. I absolutely echo what you’re saying there. And you touched on AI governance, and I think a lot of organisations are still trying to tackle it. So, going a little bit deeper on that, in terms of the global AI governance frameworks that you’ve built, what tips or advice would you give to other leaders or senior people who are trying to build these out within their organisations?

 

Emerald De Leeuw-Goggin  11:19  

Well, the first thing I would say is: stop looking at AI governance as something that’s separate from the rest of the business. And I don’t mean for that to be an abstract comment, because that actually really irritates me; I’m very pragmatic. But I think there’s a real sense that this is something that this little team over here does, and that’s not how this can work. I think the reason you’re seeing a lot of AI governance teams pop up is because somebody has to do the orchestration and somebody has to oversee that things are happening. But I’m certainly not arrogant enough to think that it’s just me and my legal colleagues who do AI governance. That would be a very bad place to be. So I think it’s about figuring out the cross-functional team that needs to be in place to actually implement the controls. Obviously, it’s the legal team’s responsibility to clearly communicate what the law requires and to figure out who in the organisation is best positioned and equipped to do those things. It’s not the legal team who does technical monitoring or data cleansing; that’s not what we do. You just need to make sure that the right experts are looking at those things, and then make sure that you really have good senior stakeholders driving those pieces, because there’s no point in trying to boil the ocean on your own. So you really need a good sense of partnership across the company. That was also true for privacy, because obviously privacy by design comes with user interface implications and data collection implications, but I feel that AI governance is even more distributed. We always talk about soft skills and things like that. I really despise that term, because I think these skills are more difficult to gain, and often people are naturally good at some of them.
You can absolutely learn them, but they become incredibly important, because your relationships need to be really, really strong. And then another thing I would say is: make sure you’re not behaving like the data police, because that makes people afraid of coming to you, and that’s the very worst situation for you to be in. I would much rather be in a situation where people come to me with something that they’d rather not tell anyone, and that they feel comfortable enough telling me, and then we can go and address it, because everything can be figured out. Making sure that you have those relationships, and that you give people the sense that you can be a trusted partner to them, that’s worth investing time in.

 

Robert Hanna  13:58  

Yeah, and the culture piece is so, so true, because that will bring out the trust piece. And, you know, be that collaboration partner, rather than, like you say, the police. I think that can get people’s backs up; it doesn’t enhance adoption, it doesn’t create the fusion you want. And I remember when a mentor said to me very early on: Rob, when it comes to things, there’s never anything wrong, there’s just something missing. I think that’s very true in this regard as well. Sometimes it’s just maybe that piece of information, that understanding of certain things, whatever it might be, that makes it a little bit challenging. Okay, now, you’re part of a global business, and global frameworks, they ain’t easy. So what are some of the challenges you’ve seen in terms of aligning frameworks across multiple jurisdictions, and how have you managed to overcome them?

 

Emerald De Leeuw-Goggin  14:38  

Yes, I think that’s a really important question. I might use privacy as an example here, because obviously it’s a much more mature discipline and maybe more relatable to people who are listening. One strategy, and not every company will want to do this, but one strategy that I find helpful, is to look at privacy as a business enabler and say: hey, why don’t we just take our controls and implement them globally? So you’re implementing some of the highest standards globally. That doesn’t mean you don’t have specific local things you need to solve, but it does make that a lot easier. I think there’s a very strong argument to approach it that way, because people like to buy from companies they trust, and ideally from companies that aren’t creepy. And that, again, ties back to your values. I like to use a three-step framework where I say values inform your culture, and then on top of that you build your infrastructure. But if your values and culture are off, that infrastructure is not going to work. So I think that’s also true if you’re implementing these global controls, because you’re kind of saying to your organisation: hey, this is how we think about this stuff. We don’t want to just give people rights over their data in a place where there’s a UK GDPR or a European GDPR; we actually think people deserve rights over their data everywhere. Why don’t we just make this a global control? So that’s one of the things we’ve done. We don’t say: hey, we’ll just hang on to your data even if you don’t want that. It doesn’t make any sense, because those people probably won’t buy from you again anyway.
So I think there’s a real argument for just having a gold standard and implementing that globally, and then obviously taking a risk-based approach to your other markets, making sure that you look at any local discrepancies and implement those specific controls in those regions. That’s one strategy, and I think there’s a lot to be said for it.

 

Robert Hanna  16:39  

Yeah, and it’s clearly been working, and everything you’ve been doing seems to turn to gold. So congratulations on what you’ve been getting involved with. We’re going to talk about the Act now in a little more detail, and some of your general thoughts, because the EU Artificial Intelligence Act is really the first comprehensive European regulation on AI. What effects have you seen it having on the legal and tech industries specifically?

 

Emerald De Leeuw-Goggin  17:03  

Yeah, it’s a good question. I think there are a bunch of different things going on. There are always a lot of people freaking out about new laws, so it’s very recognisable from the GDPR. I think it really impacts organisations very differently. If you look at how it’s structured, there’s basically no risk, then there’s what I would call transparency or deception risk, then there’s high risk, and then there’s completely prohibited. Those are the various buckets that your efforts could fit into. Almost all obligations are tied to high-risk AI systems, and I think the panic is a little bit unnecessary, because with a lot of the themes you see in those high-risk AI systems, you kind of already know in your gut that that’s something you might need to look at. It’s things like biometrics, emotion recognition systems, using AI systems for hiring, promoting, firing. So, things that can really have a meaningful impact on people’s lives. Given the fact that the controls are really tied to things with, I guess, big impact, that doesn’t really surprise me. Obviously there’s a bunch of work you need to do if you do fall into those buckets, but if we are to give people protections in those scenarios, it’s not completely unreasonable to regulate those areas. Then the other piece, the transparency risk, covers things like what are called deepfakes, but that’s not just deepfakes as you might know them from popular culture. It’s not just a politician saying things that they didn’t really say; it’s also images of objects that don’t really exist, which should be labelled. So it goes quite far, and I was quite surprised myself when I saw how broad that definition was.
But at the end of the day, you’re labelling AI-generated content, and that particular obligation comes into force from August of next year. And again, that doesn’t seem unreasonable to me: just not deceiving people with content. Because I think everyone, and I say this not with my corporate hat on, everyone is a little bit going: okay, with how good this AI is getting, how are we going to know what we can actually trust? If I see beautiful imagery, is that real? Is this person real? We should at least know. I don’t think it’s binary, that AI-generated content is bad or that it’s good; I think it always depends on the context and the purpose, and also, in some cases, how transparent you are about it. But regulating that is not that strange. And most of the stuff a lot of companies are doing is actually not that heavily regulated. So I think there’s a lot of panic initially, and then people sit down and go: okay, maybe not everything is in scope of all of these obligations, maybe it’s only certain things where the impact on people could be high. And that, to me, is very European.

 

Robert Hanna  20:24  

Today’s episode is brought to you by Clio. Are you frustrated with your current legal management software? You’re not alone, and thousands of solicitors across the UK feel your pain. However, the hassle of moving all their existing client and case data holds most back from switching, prolonging the frustration. Clio is here to help. Their dedicated migration team will be with you every step of the way while you transfer your information, and if you have any questions, you’ll get award-winning support available 24/5 by live chat, phone and email. So help is always there when you need it most. It’s no wonder Clio consistently receives five-star ratings for ease of use and top-notch service. If you’re ready to leave frustration behind, visit clio.com forward slash UK to learn more and see why Clio continues to be the go-to choice for solicitors across the UK. Now back to the show. Let’s stick with the European theme then, but from a sort of corporate perspective, thinking of the EU AI Act: what areas do you think tech companies will struggle to comply with, particularly in relation to the Act?

 

Emerald De Leeuw-Goggin  21:31  

I personally think that if you have your ducks in a row, you should be able to do it. A lot of the time it’s people not looking into this early enough, or thinking that they’re going to stop the clock. I know there’s still some discussion about that, and I can’t comment on it personally, because obviously I don’t work at the European Commission, but that seems to still be a point of discussion, at least; I see it pop up in the media quite often. But I think, just like with everything else, you need to get ahead of it. And again, it’s always people, processes and technology, in that order. So figuring out: okay, who needs to be involved? What are the processes that need to exist for us to do this? And then, are there technology solutions we can deploy that can help us? And obviously those markets are also maturing. When I started in privacy tech, it was, what, 2013? Privacy tech is a really mature market now, and there’s a lot of tooling that can be used to comply, and some of those solutions might also help with some of what the AI Act requires. I’m sure there are going to be even more advanced legal tech solutions for compliance going forward, because new regulation, in addition to people complaining about it being cumbersome, also creates entrepreneurial opportunities for people to innovate and create new things to help. Because no matter what the law says, visibility of what’s going on in your organisation will always be important, so technology probably has a role to play here. So there’s always an opportunity that comes with these laws as well.

 

Robert Hanna  23:14  

Yeah, I always say there are going to be the complainers, and then there are going to be the contributors: the people who are going to get busy figuring it out and getting on with it. And something you’ve been getting on with, and which has been super successful, that I want to talk about now, is your Women in AI Governance. So what inspired you to co-found it?

 

Emerald De Leeuw-Goggin  23:29  

So we really weren’t that intentional about it initially. It came to be when my co-founder and I said: look, it’s a lot of the same commentary that gets pushed into our feeds on LinkedIn, always from the very same people, and both of us know a lot of leaders in this space who may not have been the ones constantly popping up. So we said: hey, why don’t we create a LinkedIn group and see if people just want to have some chats and share their ideas? Because I think AI governance is more cross-functional and more multidisciplinary than any of the other disciplines I’ve personally ever worked in. It’s helpful to have philosophers, historians, people in procurement, technical people, lawyers. It’s certainly not just the lawyers who should be doing this just because there’s a law; the law is really important, but it’s not the only thing that’s important. So what happened was, lord, within a couple of weeks it had thousands of members, and we were going: what is happening? Because I remember when I had my startup, I had to push, push, push, push, push, and everything was hard. And looking back, everything makes sense, because I was so early, and also, honestly, so green; I was just out of college, with no idea what I was doing. But with this, everything just seemed to be inbound, which is very different. And then we were like: okay, now we have to really do it. There is now a membership website and things like that, but all of this is completely separate from my role at Logitech; I just want to be super clear about that. And basically, we’ve hosted various events around the world, and we have local leaders who run their own chapters, because it’s not up to us to decide what’s important to them in their particular discipline or in their particular region. So it’s kind of doing its own thing now, to a very large extent, which is really great to see.
And it all kind of started from something really small.

 

Robert Hanna  25:33  

And that’s how, you know, from acorns grow oak trees, as they say. And it’s fantastic what you’ve done. What I really like, and I always talk about this, is mission-focused organisations and companies; they’re the ones that are really going to cut through. And your mission is to foster an inclusive environment and true community where collaboration and shared expertise fuel advancements in AI governance. Love it, absolutely love that. But tell us more about the meaning behind the mission for you.

 

Emerald De Leeuw-Goggin  25:58  

So I think a lot of the time, particularly in the space we work in, a lot of privacy people have started to take on AI governance. That’s kind of been a trend in the privacy industry, and there’s absolutely nothing wrong with that. But as I said earlier, this truly is multidisciplinary, and it really does take a village to figure out what we’re going to do with AI. It can’t just be the lawyers sitting there going: well, these are the things we think are important. Because there are also going to be people who know an awful lot more about the technology itself, and people who have seen things, maybe not AI, but things with similar impact, do things in the past that we might need to be considerate of. And I don’t think it’s just up to one group to decide how all of this should happen. So the whole intent is to make sure things actually get better over time, and you do that by bringing people together. We try not to script it too much, and to allow for serendipity to happen. I think that’s been successful and really important, and I think it’s also why people have gravitated so much towards it.

 

Robert Hanna  27:09  

Yeah, and hugely successful. And I couldn’t agree more: we is greater than me. I think if you can get a group of people together with different perspectives and different thoughts, it’s just amazing how you can fuel a really awesome community, which you have. What initiatives, then, has the professional network been involved with to build diversity into tech policies?

 

Emerald De Leeuw-Goggin  27:28  

So, obviously, we don’t pick our leaders based on: okay, how well known are you, how many followers do you have, how many years have you been in the industry? We have been very open about saying we want these positions to be held by the people who would benefit the most from having that leadership position. That’s very different from saying: oh, we want the established leader to come and give us more followers, or blow things up. We have people who are currently in college leading some of these chapters; we have people in areas literally across all continents leading their own communities, because they wanted to lead and they would benefit from the platform, because it has become a bit of a platform. And of course it can grow more, and it’s still fairly modest, but it has achieved quite a lot in the 18-ish months that it has existed. So I think that’s been the critical thing: we give the seat to the person who would benefit the most from it, as opposed to giving it to the person who would give the greatest benefit to us.

 

Robert Hanna  28:45  

I love that. And giving it to the person who needs it links to everyone’s favourite radio station, WIIFM: what’s in it for me. For that individual, that’s a huge benefit, like you said, and I love that viewpoint. Okay, this is a question everyone’s going to be interested to hear about in the world we live in, when it comes to innovation and ethics, because AI developments are often ahead of regulations and legislation. So how do you strike a balance between innovation and safeguarding ethics?

 

Emerald De Leeuw-Goggin  29:11  

Yeah, this is my hobby horse. I mean, if I had a euro for every time someone told me the GDPR was going to kill all innovation... I think we’re doing all right in 2025 when it comes to innovation. The GDPR happened, and we’re still all here. So I don’t think it’s so much about balancing. I actually think it’s a positive-sum game, and both need to exist at the same time. I think you can absolutely build responsible technology; we can have our cake and eat it too. It does mean that you need to first decide what’s important, and we’ve done that, for example, by saying: okay, these are our responsible AI principles, and we will build responsibly by adhering to them. The technology can move along, but these principles are stable. I think that’s a very critical mindset shift for people. It’s not that you’re going to throw things you value out the window because you have a new piece of technology in your hand. It can’t be that way. And people can be very cynical about these principles and values, right? They’re like: oh, companies only say this to sound good. No, it gives people a why beyond the law. Yes, laws are important, but people respond very well to having a why. Why are you asking them to do all of this extra stuff? Why are we adhering to this? What would the consequences be if we didn’t? Why is it important? How do you make a good decision at 2am? Are you going to check the GDPR, or are you going to listen to your values? That’s kind of the approach.

 

Robert Hanna  30:46  

And it’s hard-hitting, but it’s so, so true. So let’s dig into that a bit deeper, then: if companies are falling short of, or failing to implement, regulations, what are some of the repercussions?

 

Emerald De Leeuw-Goggin  30:59  

Well, obviously there are really high fines associated with violations of both the GDPR and other data protection laws. There’s obviously litigation as well. But I don’t think that the stick is always the best approach; there’s a lot to be gained by doing the right thing. A lot of companies that are doing the right thing actually do well. You can do well by doing good, as we always say. So rather than leading with the scary stuff, it’s always better to say, hey, if we do it this way, we’re doing the right thing. We’re going to maintain customer trust, and we’re going to remain seen as a responsible corporate citizen. And if we really do want to take the stick approach, sure, fines can be really high, but your brand is also going to tank, right? You spend a lot of money building that brand equity over the years, and it only takes one really poor decision to undo a lot of that hard work. So I always say to people in industry, maybe don’t lead with that. I personally don’t think the whole waving-a-stick thing is particularly effective. I think most people actually want to do the right thing, and if you explain to them why something might not be a good idea, most people will understand, if you’ve done a good job explaining it. Yeah.

 

Robert Hanna  32:31  

And the other thing is, no one is perfect. No company is perfect. And like you say, if you have the right integrity, and you’re showing that you’re being responsible and taking action, that’s far better than burying your head in the sand, or just waiting and hoping that things won’t happen to you. It showcases that you take things very seriously and you’re very professional in what you do. And, like yourself, you’ve built out a team; everything you’re doing is really well thought out. It’s strategic. It’s for the greater good, rather than, hey, we don’t want to get fined. So I love that you have that more positive outlook on it, and the benefits everyone gains from actually introducing these things properly. Okay, the future. I mean, the future changes every hour these days, but what future developments can we expect to see in AI and governance?

 

Emerald De Leeuw-Goggin  33:18  

I think right now everything is about agents, right? And I think security teams in particular are going to have their hands full. That’s obviously a very big thing; it’s very hard to have privacy without solid security. So of course it’s also going to be a busy time for data protection and privacy people, but it kind of starts with making sure that that whole architecture is sound. Because if these things start going out on their own, making their own decisions and doing all of that stuff, we’d better make sure that it’s all really secure. On top of that, making sure that you have the appropriate guardrails before these things are implemented. So you’d want to be looking at, okay, where can we process certain data? Where does it require legal review? What use cases are regulated? And making sure that your teams actually engage with you before that happens. The before bit is always important. But yeah, I think we’re really looking at a lot of movement towards more autonomous systems, and that comes with a bunch of challenges from a data protection, security and governance standpoint.

 

Robert Hanna  34:31  

Well said. And this is the world we live in, and we’ve got to have people like you come on and help us navigate through these times and give us your thoughts and opinions as a thought leader in the space. One thing I like to talk a lot about on the show is skill stacking: from every experience, you can build a skill that could potentially be transferable, that could add value, whatever it might be. So for aspiring and current practising legal and tech professionals who are looking to work in AI, privacy and governance, what skills do you think are essential?

 

Emerald De Leeuw-Goggin  35:00

Develop your business skills. People are always very surprised when I say this, because they think I’m going to tell them to get another certificate. I’m not. Particularly with AI now, we can look up a lot of stuff that’s knowledge-based; that knowledge is becoming more democratised, which is probably a good thing, because people can start making sense of things more easily by themselves. That’s not me saying that lawyers are becoming redundant whatsoever, because we will always need trusted advisors who can strategically help you make decisions. But one of the reasons I say business skills is that, at the end of the day, your stakeholders are business people, and if you cannot align with their thinking and understand what their needs and requirements are from a business standpoint, it’s going to be really difficult for you to work with them and be effective. There are usually many roads that lead to Rome. Maybe you get something pitched to you, and you’re like, no, absolutely not. But if you can understand what the end goal is, you can often help them get to the same business outcome in different ways. That’s why I always say, learn things like strategic decision-making. Understand how to read a balance sheet; that’s often helpful. Make sure you have a sense of how products are actually developed, and what a go-to-market strategy might look like. All of those things, because the best legal people are people who understand their stakeholders well enough to give them advice that’s actionable, implementable, and helps them reach their goals without breaking the law.

 

Robert Hanna  36:42

Yeah. And again, as I’ve been taught, it’s not about you. You have to see things from other people’s perspectives, the world they’re in, and what the impacts might be for them and their organisations. So yeah, love that. And you touched on it there. You said lawyers aren’t going anywhere; we’re always going to need lawyers. So what do you see as the role of future lawyers and policymakers in that evolution, as AI becomes more and more prominent in society?

 

Emerald De Leeuw-Goggin  37:04  

One of my favourite approaches is one I read in a book called Prediction Machines. I’m not sure if you’ve read it, but it’s a nice book. It takes an economic view of AI, and it talks about the complementary goods of AI, which we’ve all learned about in economics at school, and those remain human judgement and creativity. Now, there’s an argument to be made that AI can be fairly creative, but we still need to decide how we’re going to deploy these tools, right? And also, what are we going to do with the outcomes? In that book, they talk about how AI is making business predictions really cheap. Prediction used to be expensive, because you had to hire experts in order to make predictions in a somewhat accurate way, whereas AI just predicts the next thing. But I think the complementary goods remain having really good judgement and being able to be creative. And those are still fundamentally human skills.

 

Robert Hanna  38:10  

Yeah, really well said, and I absolutely agree. I think the more you can lean into that creativity, the better, in the world we live in. This has been brilliant. We’ve really enjoyed so many insights, and on a very technical discussion we’ve been able to make it super relatable and super easy for our listeners and watchers to get value from today. So thank you ever so much. If I could just ask you for a final piece of advice: for the next generation of lawyers and legal tech folks who are interested in learning more about AI, tech and governance, what resources or tips would you give them?

 

Emerald De Leeuw-Goggin  38:42  

The best way to learn is to get stuck in and just start using these tools. You can follow everyone and take in all the theory, but if you’re advising people or trying to develop tools yourself, it can be very abstract if you’re just reading about stuff. Whereas if you’re actually seeing how your stakeholders are deploying these tools and what they actually look like, it’s going to become so much less scary and abstract, because when something is abstract, you might still feel very uncomfortable advising on it. Also, do understand the underlying frameworks and what’s actually going on in those tools. They’re great, but they’re not magic. The more you understand them, the less magical they will seem, and it will also help you become a much better advisor, entrepreneur, or whatever that is. So those are my two pieces of advice. And of course there’s all the basic stuff: read some books, listen to podcasts, maybe do a certification. I think certifications absolutely have a role to play, but their biggest role is actually giving people confidence, because I’m pretty sure that if you used AI to create your own study plan, you could get the resources and do that self-study. So I’m not saying that’s absolutely essential, but sometimes it’s nice to have a certificate that says this person actually knows this bunch of stuff. It can also help with job searches, because some companies really do look for those. But yeah, there’s a place for formal learning, but I actually think really getting stuck in is the best way to put your arms around this.

 

Robert Hanna  40:22  

Couldn’t agree more. Get stuck in, use the tools, fail fast, upskill yourself. And again, as a mentor said to me, the magic you’re looking for is in the work you’re avoiding. Get out there, get involved with these tools, start getting busy. Love it, Emerald. If people want to know more about your career, or indeed Women in AI Governance, where can they go to find out more? Feel free to share any websites or social media handles, and we’ll also share them with this episode for you too.

 

Emerald De Leeuw-Goggin 40:43  

Wonderful. Well, LinkedIn is still my biggest platform, so you can just find me at Emerald De Leeuw, which is possibly the worst name to spell out on a podcast, but there aren’t that many Emeralds, so I should pop up fairly quickly. I’d also like to plug Elivary, if that’s okay; it’s my executive strategy and style platform. Its tagline is “style is strategy”, which is really true. It’s unfortunate, but it’s true, and that’s what we’ll be talking about there. And of course, Women in AI Governance can be found very easily on LinkedIn too. You can find me on most social media platforms, but mainly LinkedIn and Instagram.

 

Robert Hanna  41:26  

There we go. Emerald, thank you so, so much. Once again, it’s been an absolute pleasure having you on the show. So from all of us here on the Legally Speaking Podcast, sponsored by Clio, wishing you lots of continued success with your career and, indeed, future pursuits. But for now, over and out. Thank you for listening to this week’s episode. If you like the content here, why not check out our world-leading content and collaboration hub, the Legally Speaking Club, over on Discord. Go to our website, www.legallyspeakingpodcast.com, and there’s a link to join our community there. Over and out.

Enjoy the Podcast?

You may also tune in on Goodpods, Apple Podcasts, Spotify, or wherever you get your podcasts!

Give us a follow on X, Instagram, LinkedIn, TikTok and Youtube

Finally, support us with BuyMeACoffee.

🎙 Don’t forget to join our Legally Speaking Club Community where we connect with like-minded people, share resources, and continue the conversation from this episode.

Subscribe to Our Newsletter.

Sponsored by Clio – the #1 legal software for clients, cases, billing and more!

💻  www.legallyspeakingpodcast.com

📧  info@legallyspeakingpodcast.com

Disclaimer: All episodes are recorded at certain moments in time and reflect those moments only.
