Unpacking Transparency & Trust in Today’s AI Models

What happens when you lead with trust? Unravel the mysteries of AI as we enter the world of machine learning with Emmanuel Raj, Lead ML Engineer at Relex, who sheds light on how Relex's innovative AI platform is revolutionizing the customer experience and business operations. Discover how AI is not just a buzzword but a powerful tool that's enhancing productivity, intelligence, and our quality of life. And for those who fear the 'black box' of AI, don't fret: we'll discuss why transparency in AI is not just desirable, but essential.

As we pull back the curtain on AI, we probe into trust and bias issues, which are gaining increasing significance in the AI and ML landscape. Our expert guest, Emmanuel Raj from Relex, takes us through the process of ensuring that AI systems are fair, trustworthy, and free from bias, emphasizing the need for human-machine collaboration and for understanding the reasoning behind model decisions. We also delve into candid conversations with clients about the trust and transparency issues that AI presents. And as we wrap up our enlightening discussion, we highlight the role of tools like ChatGPT in fostering education and trust in AI systems. So sit back and gear up for a journey into the captivating intersection of AI, trust, and education.

Full Transcript Below

Mike Giambattista

Maybe just for context, because ultimately I think we want to have a conversation that centers around AI, not only Relex and its capabilities, but some of the broader topics surrounding it. Why would you call it an exciting and yet hugely controversial theme?

Emmanuel Raj

So yeah, I'm a lead machine learning engineer at Relex, and one of my main jobs is to build the AI platform we're building at Relex. This is a modern platform that accelerates the AI development lifecycle, so we are able to develop AI models much faster and deploy them at scale much faster than we were capable of before. Now we can plug and play our AI models, using the AI platform, into Relex solutions, and overall it improves our customer experience and the intelligence of the Relex solutions in general. When it comes to my day-to-day activities, I have a team of machine learning engineers, and we mainly focus on building this platform in a robust and scalable way. So, yeah, a lot of time goes into development and into understanding how our customers work, to be better able to serve their needs.

Mike Giambattista

So just for clarity on this: you're using AI in two ways. One is to more quickly and efficiently develop your internal capabilities, to develop the platform, and the other is deploying AI-based solutions on behalf of your customers as well. Is that correct?

Emmanuel Raj

That’s correct, yeah. 

Mike Giambattista

Okay, good, just wanted to clarify that. So, as someone who is deep into machine learning and AI and what it can do, and probably even its limitations of what it can't do right now, I'm very interested in your take on, let's just call it, what the broader news cycles are talking about, because you hear everything. First of all, in the technology world, especially in retail, if you don't have an AI capability on some level, it's almost like you won't be considered real. So that's one thing. But on the far end of the spectrum, there are all kinds of people out there, some with some credibility, who are saying this is basically the end of humanity. I personally think that's a little bit overblown, but let's boil it down to what it looks like within the scope of what you do at Relex: how you're deploying AI, and we don't want to get too deep into your own secret sauce, but how you're deploying AI and what you see that it can do for Relex and your clients right now, versus what it's probably not capable of in its current state.

Emmanuel Raj

Yeah, it's a very interesting question as to how people perceive AI these days, and in the last three to six months a lot has happened, hasn't it? Right now, my take is that a lot is happening as we are getting into this AI revolution. In the evolution of humanity we had the industrial revolution; now we are starting to have the AI revolution, so we see a lot of smart models roll out. That's happening and it's inevitable, and it's in our hands how we steer this engine to better improve the productivity and quality of our lives.

And when it comes to Relex and the retail industry in general, yes, these days it's important to get the best out of the data we have. So we put AI and machine learning on top of the data to provide intelligent insights to our customers and provide better forecasts to steer their operations.

We have a wide range of solutions when it comes to merchandising, supply chain and operations. Some of the models include features such as end-to-end supply chain planning, which goes from inventory planning to store planning, to having planograms sorted and planned, and also optimizing the workforce within the stores. So there's a whole set of these kinds of operations going on as we provide end-to-end services, using AI and other techniques, to our customers. One of the things we are keen on focusing on at Relex is providing as much transparency and collaboration as possible, and we give our customers the lead to customize and build their own solutions on top of Relex solutions. That's the approach we take, and we have these sorts of AI solutions running on the platform. And yes, of course, in the retail industry these days, if we don't use AI, we are a step behind, because it's the age of data, and what we do with data determines the quality of the business in the end.

Mike Giambattista

You brought up something really interesting a moment ago, and that is that you pursue this technology through a lens of transparency. It would seem that a lot of the anxiety out there exists because of this kind of black box concept: you've got data inputs that you can understand, you've got data outputs that you can understand, but what happens in the middle there is scary stuff, because very few people on the outside really understand it. And I think that's kind of the source of so much of this anxiety. That's right.

So how do you pursue this with a level of transparency? Because it's one thing to say, here's the model, but it's another thing to say, here's our code base, which opens up the hood maybe a little bit too much. So how does Relex really handle that?

Emmanuel Raj

Yeah, these days it's a common problem: black box models are surfacing day by day, and it's hard to understand how and why a model arrives at a given output based on the input.

Because of these black box ways of making predictions, it's hard for a customer to understand and comprehend, in a trustworthy way, why a machine is making such a decision.

So, yeah, at Relex we take an approach where we are able to justify our decisions. Of course, there are a lot of techniques in general, and in the retail industry there's no one way to decode a black box model.

But there are certain techniques, such as looking at feature importance. For example, if a model is predicting that for the next three weeks a store needs a thousand oranges, we need to be able to know why it says so. It depends on the data that has been input to the model. If we have tools that let us understand the feature importance, meaning which parameters in the data have contributed to the model predicting such a decision, that helps us understand these black box models better. So there are various techniques for understanding these black box models, such as SHAP and LIME, which draw on game theory and other means to simplify the decision making of a model for normal consumers, or even for the users of our products. We take this approach to demystify as much as possible. And, of course, we don't always need to use black box models. Sometimes there are different kinds of models and different levels at which AI can be implemented, so there are two or three different approaches that can be taken.
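To make the feature-importance idea concrete, here is a minimal, hypothetical sketch in Python of explaining a single demand forecast with the open-source SHAP library. The feature names, data and model are illustrative assumptions, not Relex's actual implementation.

# A minimal, hypothetical sketch of inspecting feature importance for one
# demand forecast with SHAP. Data, features and model are illustrative only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

features = ["week_of_year", "promo_active", "avg_temperature", "last_week_sales"]
X = pd.DataFrame(np.random.rand(500, 4), columns=features)
# Synthetic target: demand driven mostly by promotions and recent sales.
y = 800 + 400 * X["promo_active"] + 200 * X["last_week_sales"] + np.random.randn(500) * 50

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Explain a single forecast: which features pushed it above or below the average?
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.1f} units relative to the average forecast")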

Mike Giambattista

Would you mind, just for the layperson, giving a brief explanation of the SHAP and LIME approaches?

Emmanuel Raj

Yeah, in simple terms, these methods help us understand the feature importance when a model predicts a certain outcome. They show, for the input data, which feature has contributed more to a certain decision, in terms of percentage. Let's say there are four or five different parameters that go in as input data and the model predicts a certain value. The method shows, across those four or five parameters, which parameter has contributed more to such a decision. Then we can understand: oh, because this particular feature was high, this prediction might have happened. Even though it's not 100% transparent, in that it doesn't give a rule-based explanation, at least we are able to understand to some degree, based on the data, how the machine makes its predictions.
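For contrast, here is a similarly hypothetical sketch of a LIME-style local explanation, which fits a simple surrogate around one prediction and reports each feature's signed contribution. Again, the data, feature names and model are assumptions for illustration, not the Relex setup.

# A minimal, hypothetical sketch of a LIME local explanation for one forecast.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingRegressor

feature_names = ["week_of_year", "promo_active", "avg_temperature", "last_week_sales"]
X_train = np.random.rand(500, 4)
y_train = 800 + 400 * X_train[:, 1] + 200 * X_train[:, 3]

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X_train[0], model.predict, num_features=4)

# Each tuple pairs a feature condition with its signed contribution to this prediction.
for feature, weight in explanation.as_list():
    print(feature, round(weight, 1))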

I want to take a quick break from the conversation to tell you about one of our sponsors. What could you achieve if you knew what your customers expected ahead of time? What if you could know what customers expect, by category and by brand, 12 to 18 months ahead of traditional brand tracking methods? And what if you could know exactly where to adjust and where to spend in order to derive the most benefit, every time? A customer expectation audit allows you to identify areas that require strategic reinforcement, as well as pinpoint which values will contribute most to an emotional bond with your brand, and optimize accordingly. Customerland has partnered with Brand Keys, the world's oldest loyalty-focused consumer research firm, to bring real-world customer expectation audits to brands, brand managers and CX practitioners everywhere. Want to know where your brand stands and exactly what to do about it? Go to expectationaudit.com and download a sample audit today.


Mike Giambattista

I'm going to try to take this back to a broader set of concerns, if you will. A year or two ago, when we brought up the idea of AI, there was excitement and a buzz, but there was still some concern out there about inbuilt biases, because these are human beings that have programmed this stuff, and it's hard to escape your own biases. You can't see what you can't see. It seems to me that these processes, LIME and SHAP, for instance, help to bring some transparency that would help to identify some of those biases. But what else can be done, besides explaining the weights of these various parameters, to identify what those biases are?

Because, and I'm just going to ask you a probably unanswerable question here, we are, I think, on the precipice of an explosion in AI deployments. In fact, we already are; they're just popping up everywhere. So is there a way to understand these biases, understand these weights if you will, at scale? Because it seems like, if somebody comes up with that technology, you've really got something. One, it would add to the credibility of the output; two, it would help people understand why certain predictive models are predicting what they are; and I think it would help to decrease the overall anxiety levels behind the technology as well.

Emmanuel Raj

Yeah, that's right. It's an important area where we need to understand certain biases and perhaps deal with those biases to make fair systems. So, rather than specific techniques, I can talk overall about the process we can follow to ensure that systems are more fair. When we talk about AI there are multiple definitions, but in simple terms I'd like to keep it this way: an AI system, or an advanced intelligence system, is a combination of three things. One is the data that goes into training that system; second is the machine learning, or the model itself; and third is the human in the loop.

A lot of what happens these days is that companies focus on the data and model aspects but tend to forget the human-in-the-loop aspect. So, in order to treat these biases, what if we build human-machine interfaces where we, as humans, are able to see why a model is making certain decisions, give feedback to the model as to why a decision might not be correct, maybe detect certain biases and rectify them, and feed all of this back so the model can go into a continual learning mode? By implementing these kinds of processes, we are able to study the bias both on local values and in the global context of the model with the overall data, and deal with rectifying it. Overall, these kinds of interfaces that provide human-machine collaboration are able to, one, show us the bias; two, help us rectify the bias; and, three, move us forward in building fair and trustworthy AI systems.
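As a rough illustration of the human-in-the-loop idea described above, the sketch below shows one possible shape for a feedback record: the interface surfaces the model's decision and its explanation, a human reviewer accepts or flags it, and flagged cases become candidates for bias analysis and retraining. All names here are hypothetical, not part of any Relex system described in the conversation.

# A minimal, hypothetical sketch of collecting human feedback on model decisions.
from dataclasses import dataclass

@dataclass
class ForecastReview:
    store_id: str
    product: str
    predicted_units: int
    top_features: dict        # e.g. {"promo_active": +310, "last_week_sales": +150}
    reviewer_accepts: bool
    reviewer_note: str = ""

review_queue: list[ForecastReview] = []

def submit_review(review: ForecastReview) -> None:
    """Store the reviewer's verdict alongside the model's decision and explanation."""
    review_queue.append(review)

def flagged_for_retraining() -> list[ForecastReview]:
    """Decisions the human rejected become candidates for bias analysis and retraining."""
    return [r for r in review_queue if not r.reviewer_accepts]

submit_review(ForecastReview("store_42", "oranges", 1000,
                             {"promo_active": 310, "last_week_sales": 150},
                             reviewer_accepts=False,
                             reviewer_note="Promotion was cancelled; forecast looks too high."))
print(len(flagged_for_retraining()), "decision(s) flagged for review")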

Mike Giambattista

When you're interacting with your clients, do you find that bias is a topic that comes up, or is this something that is basically just media hype and anxiety?

Emmanuel Raj

It does come up, and there are cases where we need to understand why certain forecasts are made. A simple example: in a store, if there are 100 oranges left and the store owner needs to place an order with the supplier to get oranges for the next three weeks, our system is able to give them a certain forecast, for example, order 1,000 oranges for the next three weeks. The user, the store manager, needs to understand why this forecast, and based on what. So we provide interfaces where the user is able to understand, based on the historic data, why our system predicts a certain forecast. And our forecasts are getting better and better day by day because of this human-machine collaboration we have going. So, yeah, it's an important area for us, and we try to deal with it on a regular basis to build fair, transparent and trustworthy systems.

Mike Giambattista

One of the things that I appreciate about your approach is your willingness to deal upfront with trust and transparency issues as they relate to AI and ML. We talk to a lot of technology providers here, and everybody's got an AI thing right now, and some of them are quite interesting and fascinating, but I've yet to speak with anybody who was so upfront in the dialogue, willing to talk about the trust issues. And it seems to me that's a really big deal that is largely ignored, because it can get uncomfortable if you're not fully transparent about it.

Emmanuel Raj

Right, I agree, and it's a very important area that is often ignored, which is customer satisfaction, and customer satisfaction is built upon the trust between the product and the customer. So it's an important area for us, and we aim to improve on it day by day to better the overall customer satisfaction, using this angle in AI.

Mike Giambattista

A question for you. You're a retailer, you're a supply chain manager, and you may or may not be dealing with Relex right now. How would you suggest that people in those roles start thinking about AI as it relates to supply chain?

Emmanuel Raj

Yeah, the approach can be simple. Nowadays AI is readily available, and we need to start being an adopter rather than a resistor. So it's that simple mindset shift: understanding the processes of the future, or at least being open to spotting them, having these pragmatic discussions about the culture overall within the organization, and also building these processes and systems. For example, what we are building, the AI platform at Relex, is a platform where we can deploy models at scale. It's a place where people in the company, all of us, can work, see the performance of models transparently, and have open discussions, and we can also bring in the customer whenever they want to understand why a model made a certain decision, so there is a complete audit trail. This kind of mindset shift, and having these platforms that enable trustworthy systems in collaboration with customers, could be a way forward when it comes to progressing in this age of AI.
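To give one concrete picture of what such an audit trail could record, here is a hypothetical sketch of a per-decision log entry: which model version ran, what inputs it saw, what it predicted, and the explanation, with a checksum so stored entries are tamper-evident. This is an assumption about one possible design, not a description of the Relex platform.

# A minimal, hypothetical sketch of an audit-trail entry for one model decision.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, inputs: dict, prediction: float, explanation: dict) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "explanation": explanation,
    }
    # Hash the entry so later tampering with stored records is detectable.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

entry = audit_record("demand-forecast-v3",
                     {"store": "store_42", "product": "oranges", "weeks_ahead": 3},
                     1000.0,
                     {"promo_active": 310, "last_week_sales": 150})
print(json.dumps(entry, indent=2))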

Mike Giambattista

You're talking about a pretty significant shift in mindset. You know, if there's resistance, it's probably largely because there's a lack of understanding. So, in my mind, that would require some significant re-education on what this can do, which, in my experience, is time-consuming and thus expensive to do. So maybe it's a question for your sales team and business development people, but identifying people who are ready to accept what this technology can do would probably be a big step in growing what Relex does for people. But educating people is not easy. Do you have a method? How does that work there?

Emmanuel Raj

Yeah, it's not that easy, but look, humanity evolved: when the industrial age started, people had to learn new ways of doing things. Now the AI age is here and people have to learn again. But there are amazing tools available. Irrespective of the information out there, we have something like ChatGPT, which is a smarter search engine, so people can ask questions in their own way and interact with this AI, and it is able to teach them and give them the right information in a condensed manner. These kinds of tools, for example, can help foster this education, and having transparent AI systems can help build trust for people and engage them in a much better, collaborative way as they adapt.

Mike Giambattista

Yeah, I just want to call out again that Emmanuel Raj from Relex is probably the only person on this podcast, and in the entirety of the interviews we've done, who has been right up front talking about the transparency and trust issues inherent right now in a lot of AI. So, Emmanuel, thank you so much for your time and your thinking and what you're doing. I happen to love your approach, and we'll be in touch to talk about this more, I hope.
