Reimagine Marketing: Ethical AI Meets Privacy: Avoid the Paparazzi Effect

In this episode of Reimagine Marketing, Steven Hofmans welcomes Mieke De Ketelaere of IMEC and Ruben Missinne of Colruyt Group to discuss artificial intelligence and ethics. Steven, Mieke and Ruben explore the potential risks of AI for consumers and citizens today, how consumers can protect themselves from unethical AI, and how companies should organize to ensure ethical AI approaches.

STEVEN HOFMANS: Hi there, and welcome to the SAS Reimagine Marketing Podcast. My name is Steven Hofmans and I will be your host for today's session on artificial intelligence and ethics. During this podcast, we want to answer questions like, what is ethical and unethical customer experience? Do we, as consumers, need to act to have better ethical AI, and how do you organize yourself from a company perspective to have ethical customer conversations?

To do this, I have invited two experts in the domain of ethics and AI. The first guest I would like to introduce is Mieke De Ketelaere. Mieke is currently the Program Director AI at IMEC and specialized in robotics and artificial intelligence during her studies. Over the last 25 years, she has worked for several multinationals, including IBM, Microsoft, SAP, and SAS, on all aspects of data and analytics. She's a frequently requested speaker on the topics of digitalization, demystifying AI, and data privacy, and recently released a new book, Wanted AI Translators.

The second guest is a well-known person in the Belgian retail scene, and probably sitting on one of the biggest data mines in Belgium: the shopping behavior of millions of Belgian citizens. Ruben Missinne is the Division Manager of Business Analytics, Intelligence, and Digital Transformation at Colruyt Group.

Colruyt is known and proud to be very active around ethical and sustainable business practices, and Ruben holds a Master's in history and business administration, helping different teams grow across departments and making an impact on customer experience and society. Welcome to both of you, and thanks for accepting my invitation to this podcast. I'm super excited to have you both here today.

MIEKE DE KETELAERE: Thank you, Steven.

RUBEN MISSINNE: Thank you.

STEVEN HOFMANS: So let's start with my favorite part of the show, your quote. You both have prepared a quote. And Ruben, I hope you don't mind, but courtesy tells me that ladies should go first, so, Mieke--

RUBEN MISSINNE: Absolutely.

STEVEN HOFMANS: --what is your quote, and why did you choose it?

MIEKE DE KETELAERE: So my quote for today is that digital privacy is like celebrity: we should be able to decide ourselves when to use it, when we want something, and when to turn it off, when we don't. I very much compare it to the attitude of a celebrity who decides when he or she wants to be seen in the external world, and who gets angry when too much information is given away for free about him or her. And so that's just what digital privacy is like to me.

STEVEN HOFMANS: Yeah. That's actually very interesting. I like the angle of the celebrity because today, you also have paparazzi, so you don't always control what is being shared of your private life. So very interesting angle, thanks for that. Ruben, what have you prepared for today?

RUBEN MISSINNE: I also brought a quote: artificial intelligence has the potential of democratizing big data, but it will be the most trustworthy organizations that prevail, not the most competent. I see a lot of organizations today struggle with building up the craftsmanship around AI, but I don't think, in the long run, that will make the difference. I really believe it will be the ones who keep the trust of their customers, who do this whole AI thing in a nice way, looking at data, using data, and giving back to the customers, those are the organizations that will do well in the long run.

STEVEN HOFMANS: Absolutely. So actually, it's also a callout that it's not the technique itself, where you have people who are very, very skilled in building models and being experts at that. It makes me think of Mieke's book as well, AI Translators Wanted: the people who are best at translating AI into meaningful value for the consumer are actually the ones who will prevail. They will win the business. Does that make sense?

MIEKE DE KETELAERE: Yeah, absolutely. And the fact is that it's more than data and technology; it's the people and processes behind the whole thing, all combined together, that will make a successful company and increase the adoption rate.

RUBEN MISSINNE: That's absolutely also how we look at AI at Colruyt Group. It will never take over the role of a colleague of ours. It will never do something a co-worker is responsible for. It will help our co-workers and make sure that they get extra information, extra insights, on a certain situation and the realities of our customers, but it will never take over the responsibility. That stays with the people within our organization.

STEVEN HOFMANS: So to rephrase it, then: it's like a smart assistant that is helping you, guiding you, but the decision is always taken by a human, a person who will then go for the best offer or the best solution at that time?

RUBEN MISSINNE: Absolutely.

STEVEN HOFMANS: Very, very, very interesting. A bit about your backgrounds. Mieke, you have worked for a long time in artificial intelligence. What was your first encounter with artificial intelligence, and what were the challenges back then when it comes to artificial intelligence?

MIEKE DE KETELAERE: Yeah, that's a very good question. In fact, my first encounter with artificial intelligence was in 1992, during my studies, where I was one of those lazy students who wanted to automate everything in my life. And so when I encountered the fact that artificial intelligence was basically a system that had the ability to learn by itself, to make decisions by itself, I was quite intrigued that you could create systems that would give a forecast without doing many, many calculations yourself.

And so my first encounter was during my master's thesis, where I was asked to create a system that was going to forecast the energy consumption of a certain region in Germany. And back then, I already realized that there was a direct link between the computing power that I had, the data that I had available, and the accuracy of the decisions it was going to make.

So it's something that escalated over the last 25 years: those that had access to bigger systems and bigger data sets also won on accuracy. And so, yeah, that was my first encounter, and that was also the limitation that I saw back then.

STEVEN HOFMANS: So back then, access to data and computing power resulted in a competitive edge. How did that change? Is it still the case today?

MIEKE DE KETELAERE: Well, it was still the case over the last 20, 25 years: those that had access to bigger environments to do the many calculations that AI needs also had the opportunity to create models that were more accurate, which let them win certain competitions or research tracks, which gave them more money from investors, which, again, gave them the possibility to increase their investments in data and in systems.

And it became a circle that was going around, and around, and around, because the focus was on the accuracy of the systems, not on energy efficiency. And that's something we've seen changing over the last two or three years: we now look beyond the idea that our systems should just be accurate. They should be accurate and they should be energy efficient. So it is something we definitely see changing these last two, three years.

STEVEN HOFMANS: No, especially with global warming, I think that's a very important topic. Thanks, Mieke, for the insight. Ruben, to your background. In 2005, you graduated as a historian with a thesis on student life in Leiden during the 17th century, which is, for me, a very interesting background, given that today you head up data analytics and digital transformation, two worlds that seem quite distant from each other. What sparked your interest in artificial intelligence?

RUBEN MISSINNE: Well, to be honest, I don't think those worlds are that far away from each other. Look at what you do as a historian: looking at information from the past, looking at your sources, checking them out, making sure they're reliable, and putting your conclusions down. In fact, that's what AI does; it's also all about understanding the context of today. I think the world and society have become a little more complex in the past years, and will continue to do so.

So I think we really need technology. We really need solutions like AI to understand our environment, our context, very well. So it's also about looking at information, drawing your conclusions, and bringing the insights to the decision makers within Colruyt Group, or any other organization for that matter. So I think it's a small step.

And to be very honest, it was not a choice for me to go and do this with AI. It was an opportunity within Colruyt Group, where I had the possibility to bring in some teams to create a new corporate department on information management within Colruyt Group, I think already five years ago. And that included, of course, also data science and everything around analytics.

STEVEN HOFMANS: Very impressive. Maybe for me, as an outsider, it seemed distant, but the way you explain it, you see that you base yourself on old data and then try to draw conclusions. It actually makes perfect sense for historians to be drawn to business analysis and the data world. And it's maybe a callout to historians to join the AI practice.

RUBEN MISSINNE: Absolutely. I don't think it's my fault, but within BA&I, we have several historians working.

STEVEN HOFMANS: It's interesting. It's interesting.

RUBEN MISSINNE: I don't think there's positive discrimination, but you never know, of course.

STEVEN HOFMANS: There's a higher likelihood that you get accepted. So if there are any historians listening to the show, please try Colruyt. I think there is a nice opportunity there.

Taking us further down to the ethics part, because the show is about artificial intelligence and ethics: Mieke, when we think about artificial intelligence and the dangers that come with it, people often think about weaponizing artificial intelligence, which is indeed a known risk. But when we talk about ethics in AI when it comes to serving citizens and customers, how do you define what is ethical and what is not? And should we as citizens be vigilant about artificial intelligence aimed at consumers?

MIEKE DE KETELAERE: Well, I think, first of all, ethics by itself is a very broad domain, so you can tackle ethics from different sides. You can tackle it from the fairness of the decisions the models are making. Who is going to define what's fair and what's not fair? Is that the person who creates the system? Is that the business that wants to implement the decision, et cetera?

So for that part, I'm calling for a multidisciplinary team that looks into it beforehand, prior to a system getting created, to define together what's going to be a fair decision, whether it's from an HR perspective or a decision that needs to be made on who gets hired and who doesn't. And we see that fairness has a contextual link to where you live in the world; even across regions, even within Belgium, fairness can be defined in a different way. Let me give you a simple example.

If we would do a Google search for a CEO, typically in the past, only male pictures were shown. You can say, OK, well, that's because the data that was used to train that system was mainly filled with images of male CEOs. So Google had to artificially change that, but then they had a choice to make: either they say, well, we make it 30/70, because there's 30% female CEOs and 70% male CEOs, or you can say, no, no, let's make it 50/50, because in the world, there's 50% women, 50% men.

So who is going to define what the system should spit out? Is it 30/70, 50/50, or 5/95? These are very difficult things to tackle, and I think every company building an AI system that is going to make decisions about humans, rather than in an industrial environment, has to do this by design, which means at the time you're going to define the system, you're going to have to define the fairness part. That's a difficult one.

But then ethics is a much broader field as well. It's about unconscious bias that might be in your system. So let's say that if we would dig into the CVs at Colruyt, I understand that if you have studied history, there might be unconscious bias in the systems that makes sure historians get quicker access to a job at Colruyt-- and I'm just joking.

RUBEN MISSINNE: Yeah, I was also.

MIEKE DE KETELAERE: But that's how it works. It takes data from the past in order to make predictions about the future. And so you have to get the unconscious bias out of the systems. And it's not always about gender, because people think, wait, if we leave out the gender variable, we're fine, but that's not how it really works.

For example, even an email address can say something about your age. If I have an email address that ends with Google, that might say something about the fact that I'm between 20 and 50 years old, whereas my parents will never have a Google email address. So it's not as transparent as people think. It's not as easy as just taking out a couple of variables to make sure that you have no unconscious bias in your system. These are all points that need to be tackled, so ethics is a very broad domain.

STEVEN HOFMANS: So the first thing I take away from your answer is that, by definition, data is almost always biased because of the way it's been generated, right? And what you're saying is that when you start working with data and taking decisions about humans, there somehow needs to be a human touch in there, and maybe an ethical or moral compass, like the example you gave, saying maybe we should do 50/50 for CEOs, even though, having grown naturally out of society, it has been different in the past.

So does that mean we should have rules, or society somehow defining moral guidelines to take into account when you are developing artificial intelligence? Or how do you see that in practice?

MIEKE DE KETELAERE: But the answers are contextually different. Something that can be very well accepted here in Belgium might be completely different in India. And there is the question: if you have your systems developed in India, or in Poland, or overseas, who is going to define that? So this transparency needs to happen much earlier in the system, and that's not done yet.

RUBEN MISSINNE: And so it's about being aware, I believe, of the context, and then it's not about rules. It's about what is implied underneath and the context that is created around it. We do projects with a data science team to look at the usage of data, and which kind of data you should use for a specific solution, to create as little bias as possible. And if you're creating any, then be very aware of it and make sure that it doesn't steer too much towards a specific solution. Absolutely.

STEVEN HOFMANS: OK. That makes sense. So bias is something very important. When I look at personalization and the cases that have happened, one well-known story comes to mind: the Target story. There is a rumor saying that one day, a dad called up the Target store, or the Target marketing organization, saying he received baby promotions, but actually, nobody was pregnant in his house. And a couple of days later, it turned out that his daughter was pregnant.

And while you can say it's inappropriate, the models were very effective in generating those kinds of promotions. Ruben, from an artificial intelligence manager's point of view, did you experience any ethical artificial intelligence issues when solving or handling requests from your business partners, and how do you make sure that everything remains ethical at Colruyt?

RUBEN MISSINNE: I think it's very important that you know the level of craftsmanship and the level of maturity you have as an organization around AI. This is something new. This is something where biases get in. So if you're not aware of your maturity level, you're doing things that are on the edge. At Colruyt Group, we're really experimenting with these opportunities, and we're doing some nice pilots.

But those are not pilots in the range of creating problems like these. The pilots we do mostly look at optimizing our internal processes, helping our co-workers so they can spend more time in interaction with the customers, for example, and so on. So I really think this is an example that speaks for itself. But if you're not totally aware of where you are with artificial intelligence in your organization, then you shouldn't try to tackle problems like this.

STEVEN HOFMANS: Very clear. What I see is that you're actually using experimentation in the first place to see what is possible, and then you first focus on improving your internal operations, which is a way to get to know the possibilities of artificial intelligence, and once you're familiar with the topic, you move on to more complex issues. I think that is actually a very nice path for the audience on how to grow your artificial intelligence capabilities, analytical--

RUBEN MISSINNE: And you can already create some impact for your customers. For example, we did a test in one of the shops where, with a camera, we can determine which kind of fruit or vegetable somebody is bringing to the checkout register. By allowing the AI algorithm to recognize the products, the person at the checkout doesn't have to know the code by heart and key it into the system; the algorithm sees this is a tomato, this is a banana, this is whatever.

This gives the co-worker the opportunity to really go into interaction with the customer and create an impact, with somebody who has time to communicate a little bit with you, to be friendly, and so on. At Colruyt Group, we think it's very important to build a good connection with our customers, and you create these kinds of possibilities by bringing these innovations into that well-known checkout process. So you really can do something with an impact on the customer, focusing on AI and innovations, but without creating too much trouble, absolutely.

STEVEN HOFMANS: I like that perspective, because it's not only about how you serve the customer in their shopping journey, but also about how you can put a smile on the customer's face by doing small things and making time for them so you can interact with them. That takes a whole different lens than just looking at interaction points and trying to improve that customer experience: you're really using artificial intelligence to free up time to get to that customer delight. I really like that lens you're taking there.

If we go back to the organization of society, Mieke, and how consumers can be protected from unethical artificial intelligence: I saw in your book that you were talking about the Hippocratic oath. You have notaries and, in this case, doctors who need to take an oath that they will respect the patient and do good. Do we need something similar to protect us from bad artificial intelligence?

MIEKE DE KETELAERE: Yeah, I think the responsibility should be with all people involved in AI. So, of course, the oath is one that I proposed to bring in for the engineers, but to be complete, I wouldn't say that it's only the engineers who need to take responsibility. Engineers and data scientists get all excited to use technology to do something innovative, but they sometimes lack the complete picture of how it will end up in the complex processes we have in our world.

And so that's where bringing ethics into the courses at university, or having to sign an oath to understand the impact they can have with the solutions they're building, is absolutely something important to highlight. However, when we look at the acronym FATE, which is always linked to ethics: I explained the fairness and the accountability, but it's also about bringing in the transparency and explainability.

And so in the oath, what should also be there is that if you create a system that makes an automated decision, you should, as far as you can go with these systems, make them explainable and transparent. I believe that by creating systems like this, it will increase the adoption ratio and it will also restore a trust that's sometimes lacking right now. Citizens, or customers, or even businesses sometimes don't trust the systems because they're not explainable enough.

And so this oath, next to the responsibility you take, should also highlight the fact that, as far as you can go, you make sure that you build systems that are transparent and explainable, in order to restore the trust that's very badly needed right now.

STEVEN HOFMANS: So next to fairness, transparency is very, very important. Ruben, Colruyt being a company that finds ethical business practices very important, how do you develop an inner compass to guide you and your teams to giving transparent and fair AI to the different consumers? Is there a kind of self-assessment they need to do, or how do you organize that to make sure that the values are in there?

RUBEN MISSINNE: I can see the focus on ethics around AI, but to be clear, this is not something new for Colruyt Group. AI activities are activities like any others we have within the organization, and everything we do is based on the culture, the values, and the identity we have as Colruyt Group. So it's not because, oh, here comes AI, that now we need an ethical framework. No, we have our framework; we have our values, such as simplicity, respect, togetherness, readiness to serve, and so on.

And those are, and will stay, the guidelines for all our colleagues in the activities they approach within Colruyt Group. And of course, this also goes for the data scientists who are building up the models and doing AI, but it's not something new.

I think it's very important as an organization that you really make sure those foundations are created and in place, and that you can translate those kinds of values into frameworks and rules on how to handle things. But it's important that the framework is shared, and that's already a very big step, I believe.

STEVEN HOFMANS: So what you're saying is that it needs to be embedded in the company culture, and when it's embedded in the company culture, it's not because there's a new kid on the block, the combination of ethics and artificial intelligence, that you should panic as an artificial intelligence manager. Those things are covered by the corporate culture. It should be in your values, and you should translate it into the practices you're doing. But if you have that covered, that should probably be fine for an organization. Is that correct?

RUBEN MISSINNE: Yeah, I believe so. We have these strong foundations: a very culture-driven organization, and we put a lot of effort into our values, making sure that everybody knows them and everybody follows them. Everybody has the opportunity to really learn what they stand for and how they can be applied. Absolutely. If that foundation is there, that's OK.

MIEKE DE KETELAERE: And I really liked that statement, that it should be in the culture right from the start, because you see too many companies that are now creating ethical codes and ethical committees around AI, and that's just a tick in the box. You really feel that it's just to have something in case they get a question from the press. But what Colruyt and Ruben are doing is absolutely great as an example for many out there: you just put it into the basic culture of the company.

STEVEN HOFMANS: Yeah. One interesting thing I saw was a self-assessment from the European institutions around trustworthy artificial intelligence. I know Europe is busy with it; they're also working on frameworks and on those practices. But actually, if you have it governed at the right level in your company, there shouldn't be any involvement or any further steering from Europe needed. Is that the way I can understand this as well? The self-assessments from Europe are interesting, but actually, every company should find out for themselves what is ideal for their customer, yes or no?

RUBEN MISSINNE: I really think such an assessment from the EU can help. It's like a checklist. You can bring it over, look at your activities and what you are doing, and make sure that all the things that are out there are also taken into account within the activities you have as an organization. But once again, when the foundation is right, it's just a reference check afterwards; then it is something that is already implied.

STEVEN HOFMANS: OK. OK.

MIEKE DE KETELAERE: I think it's a bit more complex than that. I mean, AI is now as big a field as medicine. At the moment, you can't be an expert in all of AI. There are so many flavors of AI. There are so many sectors involved in AI. And I think that Europe is doing a fantastic job in making the first steps towards that ethical framework.

However, what I see, and we also analyzed this in a questionnaire to companies, is that the translation to how to do it is missing. So it gives a good answer to what needs to be done, but not how. I think we need the next year to translate the what into the how, because whatever you're going to use, whether it's image recognition, voice recognition, whatever, there are different technologies behind it to make it transparent.

You can't expect a company that starts with AI to know all these techniques. So we are currently helping companies, through a structured approach and methodology, to find the right techniques for the AI solutions they're implementing. And that's where I think we still need to work and collaborate, from an engineering point of view, together with Europe, to turn the framework into a workable methodology.

STEVEN HOFMANS: OK. So Mieke, continuing on that point, my last question is: how should companies prepare for what's coming? If they want to still be doing business in 2030 with the setting we have, what should they prepare for? What is the best tip you can give the audience?

MIEKE DE KETELAERE: Well, I think companies are still jumping too much on the hype. We are walking through Disneyland, getting inspired by use cases we've seen in the press, et cetera, not knowing what's happening behind the scenes. So there should be basic knowledge at all levels within the company of what AI is and what it's not, and based on that knowledge, create a multidisciplinary team around it and have the discussions on what they're going to develop, by design.

So that's ethics by design, security by design, customer experience by design, et cetera. I think if this were in place, the solutions out there would be less intrusive and less dangerous than what we see right now. So it's bringing everything back to the design phase, on the table, and then taking a multidisciplinary approach to it. I think that's what's needed.

STEVEN HOFMANS: OK. Interesting. I would really like to thank you for this very interesting discussion. My key takeaways are fairness, one of the key words mentioned a lot in this conversation; transparency towards your customers and transparency on data; and multidisciplinary teams that can help set the right boundaries around ethics. I think that's also very interesting. I would like to thank you again, and on this bombshell, I wish you all a pleasant day.

RUBEN MISSINNE: You're very welcome.

MIEKE DE KETELAERE: Thank you, Steven. Thank you, Ruben.

RUBEN MISSINNE: You as well.
