The UnwAIred
Uncover AI's real-world impact on business with insights from industry experts.
Podcast
The Journey to Responsible AI
Welcome to another episode of 'The UnwAIred' podcast, where your host Frankie Carrero from VASS engages in a dynamic conversation with Richard Benjamins, Chief Responsible AI Officer at Telefonica and Co-Founder at OdiseIA. In this episode, Richard and Frankie explore the concept of Responsible AI, breaking down its definition and role in companies, and sharing successful examples across industries. Richard provides exclusive insights into his role as Chief Responsible AI Officer, discussing the challenges of balancing innovation and ethics in the ever-evolving landscape of technology and data. Join us for a concise yet insightful discussion on the critical aspects of Responsible AI.
Frankie Carrero: [00:00:06] Hello and welcome. I'm Frankie Carrero, director of data and AI at VASS, a leading digital solutions company. We are present in 26 countries across Europe, the Americas and Asia, where we work alongside our clients, partners and key industry players. Together, we deliver the best-in-class digital innovation that shapes the landscape of the banking, retail, insurance, public administration, utilities, telecommunications and, of course, media sectors. In today's episode, we're venturing into the heart of ethical technology with a focus on what it means to be a responsible AI company. Following our approach based on mindful technology, we will unravel what it means to integrate responsibility into the AI framework of a business. We'll also delve deep into the challenges and triumphs of building AI that aligns with human values, and we will discuss why a strong commitment to ethical standards is crucial for the future of AI. Joining us today, we have a special guest, Richard Benjamins. He's Chief Responsible AI Officer at Telefonica and co-founder at OdiseIA. At this major telecommunications firm, Richard has pioneered a culture that not only drives innovation, but also places a high priority on human welfare. Together, we will explore how companies can evolve into pillars of responsible AI, establishing new benchmarks for the industry and ensuring that as we progress technologically, we stay true to our ethical foundations. Richard, thanks for joining us today.
Richard Benjamin: [00:01:39] Yes, Frankie, thank you very much for having me here. Really, it's a pleasure.
Frankie Carrero: [00:01:45] It's a pleasure for us. And I think we're going to get started straight away into the conversation. If it's okay, I'm going to ask you to explain to our audience what exactly responsible AI is, and what role it takes in a modern company. So it's your turn now, Richard.
Richard Benjamin: [00:02:06] Okay, so responsible AI, for me, is a kind of short name for the responsible use of AI. Because it's not the technology as such that is responsible or not responsible; it's the use of this technology by people and organizations that makes it responsible or not responsible. And there are two aspects related to the responsible use of AI. The most common one is to use technology and artificial intelligence in a responsible way. And that means that, while pursuing the objectives of an application, of an AI system, of any system that AI forms part of, in terms of benefits or economic impact, you also consider the potential negative impacts on people, society or the environment. So it's looking at the wider perspective, not only, let's say, at economic benefit, but also looking at the other aspects and trying to avoid them as much as possible. Usually we call that by design, so we call it responsible use of AI by design, because it's much easier to correct things at an early stage than once you are in the market and you see there is an issue, and then you have to withdraw the product and handle a lot of communications, which are hard to handle. Now, that's one part of responsible AI. The other part I call AI for good. So it's not only that you can use this technology, AI, for the benefit of companies or organizations, but actually you can also do great things with it for society, basically helping to solve the Sustainable Development Goals of the United Nations.
Frankie Carrero: [00:03:59] Well, I would say that introducing the word use in the definition is the correct thing to do, because we've all heard a lot of people saying an AI is doing this and AI is doing that. And the fact is that AI does things for humans; there are humans behind an AI making the decision of what to do. So yeah, I totally agree with your observation, and I think that it's really a good one. Now that we have established a first definition, so anyone can have an idea of what responsible use of AI is, as you say, well, we have to say that it's quite a recent subject. There are probably not many cases out there, but have you seen some successful examples of responsible AI implementations in different industries, or do you have some examples in your head that you can tell us to illustrate what responsibility is?
Richard Benjamin: [00:04:58] If you look at the first part, so let's say what is also called the ethical use of AI: of course, if everything goes well, there is no responsibility issue, there is no ethical issue, so you won't hear anything about it because everything is fine. What you hear about is when things go wrong. So of course there are hundreds or thousands of examples of responsible AI, which means that nothing is going wrong. Unfortunately, there are also quite some incidents with AI where things happen that should not have happened. So let me give you an example. If you are a financial company and you use artificial intelligence to decide whether you want to give your customers a loan or not, you use a lot of attributes of your customers to calculate which customers in the past have paid back the loan and which customers have defaulted on the loan. Of course you can learn a model from that which tries to predict for new customers whether they will pay you back or whether they will default. Now, if you do it right, in a responsible way, then before launching that into the market, you have checked that this model is not discriminating against, let's say, sensitive attributes like gender.
Richard Benjamin: [00:06:21] Yeah. You don't want to make a difference in giving a loan based on whether it's a male or a female person, ethnic origin, political preferences, the whole set of attributes that are actually protected by law and usually represent vulnerable groups. So if you do all that by design and you avoid those things, then you can give an example of the responsible use of AI. Now there are thousands of examples of incidents with AI. Actually the OECD has an AI Incidents Monitor database with more than 7,000 documented cases of what has gone wrong with artificial intelligence. A very insightful database to look at, because it opens up your mind about what exactly those problems are that people suffer, what the vulnerable groups are, like minors or women, in different circumstances, and also where it mostly happens, in what industry and in which countries. So very insightful if you are interested in knowing more about the responsible use of AI and what happens if you don't do it.
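To make the loan example concrete, here is a minimal sketch of the kind of pre-launch check Richard describes: comparing approval rates across a protected attribute before a model goes to market. The data, column names and threshold are illustrative assumptions, not Telefonica's actual methodology.

```python
# Minimal sketch of a pre-launch fairness check on a loan-approval model.
# Assumptions: a pandas DataFrame with an illustrative protected-attribute
# column ("gender") and a trained scikit-learn-style classifier `model`.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, predictions, protected_col: str) -> float:
    """Return the largest difference in approval rates between groups."""
    df = df.assign(approved=predictions)
    rates = df.groupby(protected_col)["approved"].mean()
    return float(rates.max() - rates.min())

# Illustrative usage before release (the 5% threshold is a policy choice, not a standard):
# gap = demographic_parity_gap(test_df, model.predict(test_df[features]), "gender")
# if gap > 0.05:
#     raise RuntimeError(f"Approval-rate gap {gap:.2%} exceeds policy threshold; review before launch.")
```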
Frankie Carrero: [00:07:34] Yeah, no, you're definitely right. We only get aware, or become aware, of some things when they go wrong. So it's true, we've all seen many cases in different applications that have gone wrong. But you know, when it goes wrong, someone has to take the responsibility for that. So can you tell us how it is right now? Is there any responsible person or, I don't know, part of the organization when something goes wrong? Or is it something that is still being handled by different legal authorities or entities?
Richard Benjamin: [00:08:13] Are you speaking now in general or for a specific company? Well, in general, as of today, there is hardly any, let's say, official regulation on the use of this technology in an ethical or responsible way. But there are many voluntary recommendations from international organizations. So if you're interested as an organization in being part of that, of course you can subscribe to the OECD, which has AI principles. UNESCO has its AI ethics recommendation, which is signed by 193 countries in the world, so it's by far the largest global organization that is in favor of and stimulating the use of responsible AI by governments, by countries, but also by private sector companies within those countries. So that's what's happening in general. Of course, if you go to companies, as of today, as I said, there is no regulation yet. But for instance, at Telefonica we have a methodology for the responsible use of AI by design. And of course, depending on where you are in this journey, usually it starts with ESG. So in the environmental, social and governance part, being a responsible company requires that you, for instance, also take care of climate change. You look at your supply chain, whether it's fair. You look at all those things, and it's part of that.
Richard Benjamin: [00:09:51] You can also look, which is the case in Telefonica, at the negative impact of artificial intelligence and try to avoid that. Then you also have to do that with your suppliers. Now, when companies become more mature, it can actually move out of ESG, which is a statement and voluntary, into the compulsory part of compliance, where it becomes mandatory within the organization to do a risk assessment of every artificial intelligence system. If there is low risk, then probably you don't have to do anything. For instance, if I want to recommend a specific movie in our TV offer, like Movistar Plus, then of course, if I make a wrong recommendation, the risk to people is hardly anything. But if I am in a different sector, like the medical sector, and I make a recommendation about a serious disease, diagnosing patients with a serious disease, deciding on a treatment or not a treatment, that has a huge impact on people. So therefore there is a risk assessment that highlights the risks, and if there are risks, it comes with certain requirements that you have to take care of before you can go to market.
Frankie Carrero: [00:11:10] Okay. So then, coming back to your role as a Chief Responsible AI Officer, what is it that you do at Telefonica? What does your role entail? Is it just strategy, or do you need to be more hands-on?
Richard Benjamin: [00:11:24] Well, actually it started, let's say, 4 or 5 years ago as more strategy. This is something we had to do: designing what it should be that we do, what the next steps should be, etc. And now we are actually implementing this across the group. If you want, I can tell you a little bit about the journey that we went through in Telefonica to get here.
Frankie Carrero: [00:11:49] Yeah.
Richard Benjamin: [00:11:50] Yeah. So as I said, in 2018 we published our ethical AI principles, stating that if we use AI, then it should be fair, it should not discriminate, it should not have bias that we don't want it to have; it should be transparent and explainable when the use case requires that. I refer back to recommending a movie or doing a diagnosis in the medical sector: very different explainability requirements. It should be human-centered, so always to help humans, not substitute them, and also with the right autonomy or human intervention, depending on the risks that might be involved. The next principle we had is privacy and security, of course. But since this is already standard practice in many large organizations, you don't have to put a lot of focus on that from the AI perspective, except for the specific AI security and privacy challenges; there are some, even though not too many. And finally, it applies to all our partners and third parties and joint ventures. So it's not only for ourselves, but also our providers and the organizations we work with. So we started that in 2018, and then we came up with a kind of questionnaire that we can use to evaluate our AI products. We tested the questionnaire on several products. We changed it many, many times, because it's not easy to measure, to some extent, responsibility or ethical use. But in the end, we revised it and tested it with a lot of examples.
Richard Benjamin: [00:13:37] And I think then, in 2022, we ran a pilot in four business units with a governance model, because in the end you can have principles, but you also need an operational model of how you do it: who has to do what, whose responsibility it is, what the roles are. In that pilot we defined the governance model with three new roles. One is what we call a Responsible AI Champion. That's a person in any business unit that uses, builds, buys or sells artificial intelligence systems, and is the go-to person in case of ethical questions about its use; like a data protection officer for privacy, this is a kind of data protection officer for the ethical use of artificial intelligence. We also tested an AI ethics committee, which is an interdisciplinary group of experts who know about human rights, about artificial intelligence, about privacy, about data. So it's really a multidisciplinary group. And then we also found the need for what we call an AI office, a kind of coordinating small group of people who push the change through the organization. So we ran that in four different business units for one year, and when we finished, we evaluated it. And we wrote our governance model, which we finally approved officially last December. So now we have a governance model which we are currently implementing. So if you ask, is it more strategy or is it more hands-on? Well, it has gone from strategy to full execution, which is very hands-on.
Frankie Carrero: [00:15:20] Okay. Telefonica is a very mature company; you have a long trajectory working with data and AI, so you have everything in place. But for a company maybe not so mature in terms of data and AI, or technology, even if they want to embrace responsible AI, what would you suggest? Would you also suggest starting from the strategy point of view, or do you think there is another, simpler path for them?
Richard Benjamin: [00:15:53] When we started, there were no principles out there that we could select from. Now any company can just say, okay, we follow the OECD principles or we follow the UNESCO principles, and there are lots of tools that come with them that you can use. So on the question of what your principles should be, I think there is a lot of support right now, or at least a lot of information. How to implement it in your organization depends very much on what kind of organization you are. It depends on the sector, but also on your size, and on whether you are actually a simple user of artificial intelligence with no knowledge about it, or whether you are a developer or a provider of this technology. So it makes a lot of difference where you are. If you are a startup using AI heavily for your products, that's a different question than if you are a small or medium enterprise that is just using AI to optimize your business. So there are different ways to look at it. But at a minimum, I think what companies need to have is one person who is coordinating these aspects, who is able to find out what systems are currently in use or planned to be used in the company, and then, perhaps even with external help, assess the impact of those systems on human rights, on the environment or on people.
Richard Benjamin: [00:17:25] Now, notice that this might sound very scary, that, oh, I have to do a lot of checks, but you also have to take into account that most AI applications are harmless. They are simply optimizing something that has already been going on for many years, and so that's not an issue at all. But for the high-risk ones, you have to make an extra effort: if you find anything in advance, try to mitigate it, prevent it, or maybe avoid it, rather than dealing with it once it's on the market. Now, I think for many small and medium enterprises, except if they are in a very specific sector, there will not be a lot of high risk, so there will be little impact. The only thing they have to do is register or understand what systems they have, and do a brief risk assessment. For companies who are in a sector that is affected, of course they have to do a little bit more work, but that's also for the benefit of society.
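As a rough illustration of the "register your systems and do a brief risk assessment" step Richard suggests for smaller companies, here is a sketch; the domains and rules are simplified assumptions inspired by the examples in the conversation (movie recommendation versus medical diagnosis), not the EU AI Act text or Telefonica's questionnaire.

```python
# Sketch of a minimal AI-system register with a naive risk triage.
# The domains and rules below are illustrative assumptions only.
from dataclasses import dataclass

HIGH_RISK_DOMAINS = {"medical diagnosis", "hiring", "credit scoring", "justice"}

@dataclass
class AISystem:
    name: str
    domain: str           # e.g. "content recommendation", "medical diagnosis"
    affects_people: bool  # does the output directly impact individuals?

def triage(system: AISystem) -> str:
    if system.domain in HIGH_RISK_DOMAINS and system.affects_people:
        return "high risk: run full assessment (bias, transparency, human oversight) before launch"
    return "low risk: register the system and move on"

register = [
    AISystem("movie recommender", "content recommendation", affects_people=False),
    AISystem("loan approval model", "credit scoring", affects_people=True),
]
for s in register:
    print(f"{s.name}: {triage(s)}")
```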
Frankie Carrero: [00:18:28] Okay. Yeah. So in some ways, what we're saying is that you have to choose the right problems, the right projects, to apply responsible AI to in these first steps. And many companies have many things that they want to do; they want to innovate in many different areas. And sometimes we find that those most innovative areas are the most, well, I won't say ethically incorrect, but at least the most prone to have some issues. So how do you think someone can discern between those applications that are the best to apply these first steps of the responsible AI process to, and those applications that can become, we could say, dangerous? And also, because we are at a moment right now where there are some regulations that are going to be deployed in Europe, we don't know exactly how they're going to affect the companies and the projects that we have to develop. So what would be your recommendation in this case? Just to go for the simple projects, where you know that you're not going to have any ethical problems? Or do you think that there is a way to approach those other projects that can be ethically problematic?
Richard Benjamin: [00:19:59] Well, first of all, I don't think I agree with you if you say that the most innovative AI applications are the most dangerous ones.
Frankie Carrero: [00:20:12] Yeah, sometimes I.
Richard Benjamin: [00:20:14] I think there is no relation whatsoever. It can be the case, but it's not that the more innovative, the more risk. I would say it has nothing to do with it. Of course, if you want to start using AI, then try to find an area, with business-as-usual thinking, that has impact on your business and doesn't have a lot of side effects. So if you are a company that is starting, I would never recommend starting in the HR, the human resources or people, area. First of all, because if you use AI for hiring, for promoting, for finding the best employee, well, that's high risk in the future AI regulation, which you mentioned. So you might be able to do it, but you have to go through several steps in order to avoid discriminating, because this is all about equal opportunities. And if you use AI without thinking, you might not provide equal opportunities to everybody without even being aware of it. So that's the first thing. Second, if you use AI on employees, you might create the impression that you're actually tracking them on some attributes and then trying to calculate from that, with AI, who are good performers or bad performers. And that is very delicate.
Richard Benjamin: [00:21:39] Yeah, it creates a lot of unrest within organizations, a lot of issues with the works council. And actually in Spain there is a recommendation from the Ministry of Education, I think it's the Ministry of Economy, that states that if you want to use AI in the labor area, then you already have to comply with a lot of things, which is informing the employees, but also informing the works council. But there are many other areas you can use that are not affected. If you are a small company or you are in the industrial sector, if you are working in a factory with lots of machinery, of course predictive maintenance, trying to optimize logistics within the factory, usually that doesn't involve any people, so you can do that; there's not a problem in that. So if you are interested in experimenting with what it is to make responsible use of AI, then you can take some delicate use case, but otherwise I would recommend starting with normal applications that don't have a high risk, to get acquainted with it and obtain the business benefits. And when you are a bit more mature in that area, then you could try to move on to higher-risk applications.
Frankie Carrero: [00:23:04] Okay. So now going a step further, we've seen that introducing the idea of responsible AI into a company is not exactly an easy task. There are ways to overcome that now, but it's not really an easy task. But besides that, a company as huge as Telefonica works with different industry peers and also with external partners. So how do you approach collaboration with those external partners and other peers to promote responsible AI practices when you work on common projects?
Richard Benjamin: [00:23:37] Yes. So we do that all the time. We've been very vocal on the responsible use of technology in general and AI specifically, and we've always tried to convince others to do the same. For instance, we are part of some of the expert groups of the OECD, which has a lot of activity on responsible AI and governance of AI. We collaborate in groups with the World Economic Forum, also on AI governance. We are co-chairing the Business Council for the UNESCO AI recommendation, together with Microsoft, which brings together, let's say, businesses that are mature in terms of responsible AI to create material and best practices to share with other industries and other companies. So that is really about working together to promote this. And the same in the GSMA, the organizer of the Mobile World Congress, where we are very active in the ethical, responsible AI part. We have a specific task force working on that called AI for Impact, which is partly focused on this and on how you do that across the telecommunications sector, where we have a lot of sharing between peers. Today it is still the case that there is a differential benefit if you do this, because you stand out from the rest. But the reason for doing this is not to have a differential advantage; it is that every company and every public administration uses this technology in a responsible way, and not only in a profit-seeking way.
Frankie Carrero: [00:25:30] Okay. I have to say that you're also the co-founder of OdiseIA, which is an organization that serves as a meeting point for a lot of people, but also a lot of companies that are working on this, also on responsible AI for good. What can you tell us about what OdiseIA means and how it is also helping in this space?
Richard Benjamin: [00:25:55] Yes, thank you for the question. So OdiseIA is a not-for-profit, independent organization that tries to help several stakeholders understand what it means to make ethical and responsible use of artificial intelligence, through different activities like thought leadership, but also events, reports and research, where we reach out to different stakeholders. First, this is for organizations, both public administrations and private sector companies, who are interested in using artificial intelligence in a responsible way, but also for citizens and society as a whole. Because usually the discussions on the responsible use of AI are a discussion between governments and companies, the private sector; of course, the academic world also plays a role here. But what is normally missing are the citizens. And in the end, it's the citizens and societies that are affected by the potential negative impacts of this technology, and therefore it's very important that those citizens are up to speed, that they understand to a certain level how this technology works and how they can be affected. That's why we put a special focus on civil society and groups that are vulnerable in terms of this technology. So OdiseIA was founded about four years ago by a group of independent experts, who all have some relationship to artificial intelligence. They can be, let's say, university professors, journalists who write about technology, experts in technology, lawyers. So a very diverse group, but all related to the ethical use of artificial intelligence. And we do interesting projects where we help organizations assess the impact of their applications in terms of responsibility and ethics, across a range of sectors such as big tech, industry, defense, healthcare, insurance. So all very important sectors that need to adopt this technology for their benefit, but also need to know how to do that in a responsible way.
Frankie Carrero: [00:28:22] Thank you very much. I'm also a member of OdiseIA; I learned about it from you, precisely, like three years back, and I've seen it grow and play an important role in this AI environment over the past few years. So now we're reaching the end of the interview, and I would like to ask you about a term that has become more, I would say, popular in the past few months, which is transparency, and the way in which companies can ensure transparency in their AI systems. Because, you know, we usually want to know the reason behind a decision when it comes from a human, but we would also like to know the reason why a system, in this case an AI system, throws a result or predicts something. So now that this is a very hot topic, what's your take on it? How do you think we can really confront transparency? And do you think that it is possible nowadays?
Frankie Carrero: [00:29:29] Okay.
Richard Benjamin: [00:29:30] Thank you very much. Indeed, transparency is a very important question in certain use cases. Let me go one step back and explain why this is so important. Artificial intelligence in the past was just a bunch of complex, related rules about certain things. And those rules were clearly understandable by people, because they were written by people. So if a system came to a conclusion, you could actually trace back what had happened inside the system, and you could say as an expert, well, this makes sense or this doesn't make sense. Now, with the arrival of deep learning, the systems have become so big, and there are no rules, just layers and numbers and vectors, that it's very hard to understand from a human perspective why the system is saying this person has a disease and this person does not, or, for any complex task, why the system is proposing this move in chess. It's not understandable by a human. So therefore there are ways to open those black boxes where you can actually see why things are happening. And that is very important in, for instance, the medical domain, but also in the legal sector, if you have a judge adopting a decision from a system that says, well, this person had better stay in custody rather than be released until the court date.
Richard Benjamin: [00:31:11] Well, this has a huge impact on people. Therefore, if a medical expert or a physician, or a judge, or anyone else making decisions with an impact on people's lives, cannot understand why the system recommends one decision or another, then actually this person should not take this information into account, because it could be based on a spurious relation, which can be very dangerous. Just an example of how this can be dangerous. There was a lot of work with deep learning on trying to recognize animals, and they did a test between wolves and dogs. They trained an algorithm that was able to classify, with 99% accuracy, this is a dog, this is a wolf. And then they asked: show me the pixels why you think this is a dog, or show me the pixels why you think this is a wolf. And it turned out that for the wolf, the pixels that were highlighted showed snow. It turned out that most of the pictures with wolves were taken with snow in the background. So the algorithm learned: if I see snow, it is a wolf.
Richard Benjamin: [00:32:19] Yeah. Now of course that is untrue, even though the classification was okay. And we don't want people to take over such decisions for a reason that is not valid, to take decisions that have an impact on people's lives. So transparency is very important for high-risk systems. It is not so important for systems that are not high risk. Again, if I want to recommend a song, why should I understand how that works? I might be interested in why it's recommended to me, but if it recommends the wrong song to me, there is no problem whatsoever; there is no impact on my life. Likewise with movies, and likewise maybe with advertising, to some extent. So there are a lot of systems where there is no need for transparency, because if there is no transparency and something goes wrong, nothing happens. But there are many systems where it does matter. So people tend to say transparency is always very important, but I would not agree with that: it's very important for high-risk systems. In the regulation of the European Union, transparency is compulsory for high-risk systems; it's not compulsory in general. Now there is a new transparency requirement with these generative AI systems, these large language models.
Richard Benjamin: [00:33:42] These are even one step beyond deep learning in complexity. And actually they do great things: you can have conversations with them. But understanding how they come to such a coherent dialogue or conversation or output, nobody knows. We know how the technology works and what it does, but it's a 'simple', between brackets, technology. By doing it at such a large scale, like training for ten months and spending ten million on electricity for the machines, the output is surprisingly impressive. And the gap between what, for instance, ChatGPT tells me and understanding why it tells me that: beyond statistics, there is no explanation for that. So if we use ChatGPT for impactful decisions on people, explainability is a problem and transparency is a problem. And that's why, in the European regulation, there is an obligation on people or companies who use systems like ChatGPT, or the very big ones, to be transparent about how those models have been trained, what the boundaries are, how good or how bad they are. They need to know that before they can actually put the system on the market, if it is high risk. If it's not high risk, you don't have to do anything; you just do it.
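The "show me the pixels" test Richard mentions is what explainability tooling (saliency maps, LIME, SHAP) tries to provide: highlighting which parts of the input drove the prediction. Below is a very small sketch of the idea using occlusion, masking patches of an image and measuring how much the model's score drops; the classifier here is a placeholder stand-in, not the actual wolf and dog study.

```python
# Occlusion-based saliency sketch: mask patches of the image and measure how much
# the predicted probability drops. A big drop means the patch mattered.
# `predict_proba(image) -> float` is an assumed stand-in for any image classifier.
import numpy as np

def occlusion_saliency(image: np.ndarray, predict_proba, patch: int = 16) -> np.ndarray:
    base = predict_proba(image)
    h, w = image.shape[:2]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # black out one patch
            saliency[i // patch, j // patch] = base - predict_proba(occluded)
    return saliency  # high values mark patches the model relied on (e.g. the snow, not the animal)
```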
Frankie Carrero: [00:35:19] Okay, Richard, thank you very much. You know that, well, time flies, so we need to conclude this conversation. It's been really amazing for me to have you here today, and thank you also for all your insights and for sharing them with our unwAIred family, which is the audience that we have at the podcast. So thank you very much, and you will be welcome back in the future.
Richard Benjamin: [00:35:46] Yes. It was a pleasure to be here, Frankie. Thank you very much and goodbye.
Frankie Carrero: [00:35:50] Thank you. So, well, for everyone, this wraps up today's episode. We'll be back in a couple of weeks with another deep dive into the world of artificial intelligence in our podcast series, so please stay tuned. And a big thank you to all of our listeners for joining us on this journey. Thank you very much.
Podcast
The Present, Past, and Future of AI Deployments
Join our host, Chris Brown, COO at Intelygenz, a VASS Company, as he explores the creation, functionalities, and potential impact of AI agents. Special guest Jonás Da Cruz, Co-founder and SVP Head of Engineering & AI at Intelygenz, a VASS Company, adds valuable insights on how these agents are reshaping industries. Dive deep into the realm of AI evolution and discover the possibilities that lie ahead!
Chris Brown: [00:00:06] Welcome back to what is now our third episode in the unwAIred series of podcasts from VASS. If you're new to the series, my name is Chris Brown. I'm the chief operating officer at Intelygenz, a VASS company specialized in the deployment of artificial intelligence solutions into production. Today I'm joined by Jonas de Cruz, someone who's very familiar to me. Jonas is our head of AI and co-founder at Intelygenz. And we're going to take a little trip down memory lane, and we're going to use that evolution-of-AI journey as a vehicle to pick out some key aspects to help our audience understand what the key considerations are to successfully deploy AI, because times have changed. So welcome.
Jonas de Cruz: [00:00:57] I'm very happy to be here.
Chris Brown: [00:00:59] It's good to have you. So get us kicked off on our little trip down memory lane. It's going to be interesting, because there have been some lumps and bumps along the way, right? A little bit, yeah, a few lumps and bumps along the way. When you start any cutting-edge or bleeding-edge journey, you know it's not going to be smooth sailing. But can you just get us started today and take us back to where it all began, at the new dawn of AI, back in, what, I think 2014?
Jonas de Cruz: [00:01:28] Actually, when we created the company in 2002, we started with some AI projects. Wow, very old-fashioned AI, but we were going from the lab to production in just a few months, which is not, even today, something easy to achieve. We were deploying an installation of car plate recognition in a parking lot, and it was working very well. We were working with totally different techniques than what we are doing today, but it was computer vision and it was AI. The real step forward was in 2014. You were there with us. Yeah, I was.
Chris Brown: [00:02:24] Part of that, yeah. I remember those TensorFlow days, Keras. Can you talk to us a bit about that? Because that was interesting. We were early, right? We were really early.
Jonas de Cruz: [00:02:33] We were very early in this new AI wave, because we were in our offices in San Francisco, very close to where everything was happening in 2014. And I think around that time Google launched the first version of TensorFlow.
Chris Brown: [00:02:53] I remember.
Jonas de Cruz: [00:02:54] And we were checking if we could use these kinds of frameworks to apply AI in real business. And the problem was not only having these kinds of frameworks and tools, or the lack...
Chris Brown: [00:03:13] Of frameworks and tools.
Jonas de Cruz: [00:03:15] Yeah, yeah, but it was very important, because even if you have this kind of software that you can use to make it easier, you usually don't have access to the kind of compute power that you are going to need. But at that moment, Nvidia was launching new GPUs with something called CUDA cores that match very well with some of the maths that you need to perform when you are using machine learning and deep learning. And it was like, okay, if we invest a few thousand dollars, like a thousand per GPU, maybe we can have 4 or 5, and it's not a big investment. And combining this with these frameworks, if you remember, we were able to do beverage recognition in fridges, in a cooler, with horrible cameras and horrible connections, horrible everything. But some basics were very well established, because today we are still using frameworks like TensorFlow, PyTorch or others, and we are still using GPUs, in another way because everything has evolved a lot, but they are still there.
Chris Brown: [00:04:42] I have to tell you, I still have one of those GPUs for nostalgia in my garage at home, and one day I'm going to frame it and tell that story of those early days. But we did do the cooler project. It was successful. It was tough, right? Let's be honest, it was tough to get to where we wanted to be. I think you could solve that problem today significantly quicker, easier, more successfully than we did back then. But we did it. It's that lab-to-production piece that was always the sticky piece.
Jonas de Cruz: [00:05:17] I'll also tell you that, yes, it could be easier today, but not super easy. Some parts are still complex, because when we are doing classic software applications and engineering, you have a deterministic workflow, a one-way pipeline. You know that this step is going to have this outcome and the next step is going to have this outcome, and if everything is working fine, it will be done; it's binary, it works or it doesn't work. But when we are working in AI, the workflow is quite a bit more complex. You have...
Chris Brown: [00:06:09] Complex but different.
Jonas de Cruz: [00:06:11] Right. It's still software, and there are many software practices that you still have to apply, like TDD and quality assurance, many things. But the most important thing is that you have this complex workflow, full of steps that are not deterministic; they are probabilistic or stochastic. So it's totally different. Even when you are in a conversation with a customer, you cannot tell them this is going to work or it's not going to work; it's going to work more or less at this level of accuracy or performance, these kinds of things. But you cannot say at the beginning whether it's going to work or not.
Chris Brown: [00:06:56] I was just about to bring that up. I always say to our clients, and I spend a long time with our clients, especially at the front end of projects, that when we're doing automation projects, or if we've embarked on some form of automation around the AI, we know it's going to work, right? It might take us a week less or a week more, but it's going to work; we're going to get the job done. And that whole deterministic piece gives way to the non-deterministic piece of the AI, which you're talking about at that deep technical level, and it transcends all the way up the conversation to the client. And I think that's the first pick-out of this journey that I would take: even in the initial conversation, the initial certainty of where you're going with AI, you can't be super sure it's going to work. Right? You just don't know that it's going to be...
Jonas de Cruz: [00:07:47] It's going to work at a certain level.
Chris Brown: [00:07:48] It will work, but enough to hit the business case level, or to satisfy the need of that process or operation you're trying to achieve? You have to have that conversation early with your clients, from day one.
Jonas de Cruz: [00:08:01] You need to have a clear ROI, or maybe you are going to fail even if you are 99% accurate. We have seen that several times: they needed 99.99 and it was almost impossible. And it wasn't possible.
Chris Brown: [00:08:15] Well, not just the level of accuracy. We've done projects where we've changed our processes, based on the back of some significant learning: projects where we've gone into a project assuming, and there's the danger word right there, assuming that if we're successful, then this thing's going to go to production and change the world for this client. And we execute the project, it's technically successful, it achieves the classification, the prediction, the outcome that the client is looking for, beyond the expectation of accuracy. And they take the solution and say, I don't know what to do with that. I haven't thought about how I'm going to implement that within the flows and the rest of my business, and how I'm going to absorb this new knowledge into my business. So we've even learned from that process as well.
Jonas de Cruz: [00:09:09] Also, the people involved in the current processes are going to be impacted by this, because AI is not, I'm going to say, replacing humans, but some types of tasks that humans are performing. Before, automation replaced very specific tasks, but not cognitive tasks; and now with AI you can replace or improve some cognitive tasks that humans are doing. So as an organization, you need to know what the process is going to be when you put this AI improving this operation that you are doing here, or whatever. What next? Because maybe you are not ready. Actually, we have seen several times that they are not ready.
Chris Brown: [00:10:05] Ready from a change management point of view. You know, not even that long ago I used to say to our clients, there was a lot of talk about AI, early hype, maybe not 2014 but early on, that AI is going to replace all of these human jobs. We weren't even going to talk about this today, by the way; I knew this would happen, we've already gone somewhere else, but we'll be fine, hold tight. So I used to talk to clients and say, you know, it's not AI that's going to replace jobs, it's the automation. We've always automated around the AI; historically we came from an automation background and morphed into that AI company very early. So I was always of the opinion that AI is not going to replace the jobs, it's the automation that takes the jobs away. And now, just look, that actually doesn't hold true anymore. And I know we're not going down that avenue today, but that's how fast this whole journey is moving and how fast the whole technology is moving on.
Jonas de Cruz: [00:11:14] Yeah. One thing that is almost impossible today is to try to figure out where we are going with this. If you remember, a few years ago we were all saying that taxi drivers are done: AI is going to take their jobs. No, now it's software engineers who are in danger, because that part is going much faster. So we don't know what is going to happen.
Chris Brown: [00:11:44] Yeah. So I guess the point you're making there is, you know, Tesla launched their Autopilot and that was it, no one's driving again in the next year. Elon is good at that, to be fair. And the next year, I mean, the Cybertruck is next. Right, so he's amazing, he's fantastic, and he does some incredible things. But it didn't happen; it hasn't happened to date. It's a very, very hard problem, right? The edge cases are incredibly difficult, the variances across different roads of different countries, etc. It's a difficult, difficult problem. And the point you're making is that in the software world things just took off, and where all the software engineers were sitting all smug about the taxi drivers losing their jobs, well, hold on a second, the tables are turning, and...
Jonas de Cruz: [00:12:34] I don't think we can figure out who is next, because I don't know who is next. Maybe people in the audio and video industry, or writers, I don't know. We don't...
Chris Brown: [00:12:47] Know. Right? We don't know. Look, we haven't even got out of our 2014 bumpy road: TensorFlow, trying to get things into production with no frameworks, no tools, or at least not great frameworks and tools compared to what we've got today. Let's try and move this forward. I know I took us off on a little tangent there that we weren't meant to go on; I'll take the blame for that one. But what's changed? Take us a little bit through those earlier years and the progression that we've made. What's changed since then?
Jonas de Cruz: [00:13:20] So I think two main things have happened, technically speaking. If you remember, when we started with AI, it was super clear that you had two big fields in AI, problems you can solve applying supervised learning or unsupervised learning. You have this other one, reinforcement learning, which is a little bit more complex, but in the real world of business, reinforcement learning probably is not...
Chris Brown: [00:13:50] We were just talking, in the last episode, when I was talking with Chris Rivas about working with contingent on some of their unsupervised learning, which we don't get to do a huge amount of, and how that was pretty exciting. Sorry. No, no. Yeah.
Jonas de Cruz: [00:14:03] So the point is, something happened a few years ago when we started to learn how to apply something called semi-supervised learning. When we are working in supervised learning, you need tons of data, all perfectly labeled. Beautifully labeled, exactly. Imagine that you only need a few labels: then you can create, with different approaches, systems that learn how to label, and then the other system already has those labels, so it can learn by itself, more or less. There are two big areas there. One is self-supervised learning, which all large language models are based on: you have tons of text, so if you reduce the problem from an analytical perspective, the problem is only to try to predict the next word in a sentence. Because you have millions and billions of sentences, you need to label almost nothing. You have other problems, but you don't need to label, because you have the next word: I know what I'm looking for, and I can check whether what the model says is similar or not similar, so you can solve the problem. And the other interesting field is adversarial nets, where you have two different models: one is generating samples, and the other model is trying to classify or identify whether what the first model produced is real, or whether it's trying to, how do you say, not give you the truth.
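To make the "the next word is the label" point concrete, here is a tiny sketch showing how raw text becomes (context, next-word) training pairs with no manual labelling; this is the self-supervised objective Jonas refers to, in miniature, not how a real large language model is built.

```python
# Self-supervised labelling in miniature: the corpus itself provides the targets.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept"
tokens = corpus.split()

# Every position yields one training example: (previous words, next word).
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in examples[:3]:
    print(f"context={context!r} -> next word={target!r}")

# A trivial bigram "model" learned from those pairs, just to show the prediction task:
bigrams = defaultdict(Counter)
for context, target in examples:
    bigrams[context[-1]][target] += 1
print(bigrams["the"].most_common(1))  # most likely word after "the"
```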
Chris Brown: [00:16:06] Yeah.
Jonas de Cruz: [00:16:07] Hallucinations? No, no, it's not a hallucination in general. It's like, I'm trying to...
Chris Brown: [00:16:14] I'm trying to catch you out.
Jonas de Cruz: [00:16:15] That's what I'm trying to say. Sorry, my English.
Chris Brown: [00:16:18] No, no. Try me in Spanish if you want. Yeah. Don't try me in Spanish.
Jonas de Cruz: [00:16:22] Well, next. Next episode. Maybe.
Chris Brown: [00:16:25] I said that years ago when I was. It's never happened. So let's not go off on another tangent.
Jonas de Cruz: [00:16:29] So these two areas: everything that is happening today is related to these two things, adversarial nets and self-supervised learning. And the other thing is, okay, but we need an architecture capable of taking these new ways of learning and being able to really work, because if you remember, we used to have not very complex architectures when we are talking about models. So Google in 2017 published "Attention Is All You Need", which is Transformers; I'm sure you know what Transformers are. They published their discovery, and everything has changed since then until now. I think it's the most important thing, it has changed everything, because the basics are the same: if you are talking about neural networks, the artificial perceptron that almost any architecture uses is almost the same as the one we already had some time ago, so nothing basic has changed.
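Since "Attention Is All You Need" comes up, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of Transformers; shapes are kept tiny, and this omits multi-head projections, masking and everything else a real implementation needs.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dimension 8
print(attention(Q, K, V).shape)  # (4, 8)
```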
Chris Brown: [00:17:45] Well, let's explore that a little bit. What hasn't? So we've gone on this journey, right? Like any technology journey, it's evolved, it is evolving fast, and it's still evolving fast. We know that, right? So what hasn't changed since those early days that you would pick out? We knew this episode was going to be a little bit more technical, so let's stick on that. Technically, what hasn't changed? We talked about change management; that wasn't a technical aspect, but it's how you implement the technical solution. Talk to us a little bit about that.
Jonas de Cruz: [00:18:20] So I can give you an example that is happening today. The biggest challenge we have right now in our projects is not the amount of data that we need, the type of models, or the amount of GPUs we need; it is how we can make sure that this is working properly in production, because we have millions of different data points coming in from the real world. We were in the lab; now we are in the real world, and we need to ensure that in the real world your solution is working.
Chris Brown: [00:18:59] It's what we call the chasm, right? Exactly. It hasn't gone.
Jonas de Cruz: [00:19:02] No, no, because each case has its own problems. And the other thing, I think we mentioned it before, is the human management. When we are introducing these kinds of things, there are humans involved. In this case, what we are doing is automating the decision about whether some transaction in a financial institution is fraud or not. They have like a thousand rules, and now we have a black-box machine that is performing better than the rules. So they want to know: how do we implement this in our day to day? Because maybe it's not just a replacement; maybe it's working together and having some feedback loop that is going to give you wisdom in some...
Chris Brown: [00:19:59] It brings to my mind a project not so long ago where we were implementing, it's different to the fraud case, right, but I'm talking about the same concept: how do you get the human out of the loop? How do you give confidence to get the human out of the loop? I remember a project we did which was about automating the operations of a telecoms network. So they're looking at alarms, failure points, across the telecom network, and we called it next best action. We would predict the next best action, and then we'd perform the next action if it was an automated task that we could perform. And, you know, we produced an incredible solution, and that's not the point of the story, right? We've done many incredible solutions. But it was highly accurate, more accurate than we ever imagined it would be, more accurate than the boundary line we had established for whether this thing should go live. And it took us 12 months to get humans out of the loop. It took 12 months, even with that level of results, to convince an operations team to step back and let it run. It was a little while back; we hadn't coined the phrase human in the loop, I hadn't heard it, but we were calling it a suggestion mode at the time. Remember? Exactly. It would suggest the action, and it would only do it if someone clicked yes. And, you know, 99.9% of the time they're clicking yes, and you're thinking, why not just automate it? Right? But that took 12 months. It took 12 months. Do you remember that one?
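A sketch of the "suggestion mode" Chris describes might look like the gate below: the model always proposes, but actions only execute automatically once auto mode has been switched on, and every human decision is logged as feedback. The class, threshold and function names are illustrative assumptions, not the actual project.

```python
# Sketch of a human-in-the-loop gate: suggest by default, auto-execute only when enabled.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SuggestionGate:
    threshold: float = 0.95   # illustrative confidence bar for auto-execution
    auto_mode: bool = False   # flipped on only after trust has been established
    feedback: List[dict] = field(default_factory=list)

    def handle(self, action: str, confidence: float,
               ask_human: Callable[[str], bool], execute: Callable[[str], None]) -> None:
        if self.auto_mode and confidence >= self.threshold:
            execute(action)
            accepted = True
        else:
            accepted = ask_human(f"Model suggests '{action}' ({confidence:.0%}). Approve?")
            if accepted:
                execute(action)
        # Every outcome becomes feedback for monitoring and, later, the go-live decision.
        self.feedback.append({"action": action, "confidence": confidence, "accepted": accepted})
```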
Jonas de Cruz: [00:21:40] That is normal.
Chris Brown: [00:21:42] Ah, so. Okay, I want to move on. We've done the past. I have no idea where we are on time, by the way, so if someone can help me out a little bit on time, that would be amazing. So we've done the past, we've done the bumpy road, we've done some of the pick-outs of what's changed and what hasn't changed. You know, the chasm is still there, the change management is still there. Yes, the technology frameworks have got a little bit better, the models have got significantly better, we can consume models as a service now, which can take a little bit of that load off; we talked all about that. How about we talk a little bit about what's coming? Because very recently there were some big announcements around agents. Can you talk to us a little bit about what we should expect from agents?
Jonas de Cruz: [00:22:30] So I'm going to use the example that you were explaining. The moment that this next best action is going to perform the action, or do something instead of just suggesting to do something, it will turn into an agent. So an AI agent is capable of performing actions in an environment; the environment can be physical or digital. And the other characteristic that is going to change many things in the world is that they can self-learn from the environment and from their own trial and error in that environment. So what we had done in that case was a mix between a real agent and a not-full AI agent, but it was part of it, because it could perform actions in the real world. So things are going to change. The only thing we...
Chris Brown: [00:23:35] ...didn't do was call it an agent; as normal, we were ahead of our time, right? But we called it AI decision-making, and we built automation around that decision-making in order to take the action. We just didn't call the combination an agent. Yeah, because...
Jonas de Cruz: [00:23:52] In business we are usually not talking about agents, but behind the curtains we are talking about agents all the time. Agents are not something new. What is new is that we are creating agents from current knowledge, and you don't need to code to do it. This is what, for instance, OpenAI presented the other day in their keynote: you can create your own GPT, and this GPT can learn about your own data and your own configuration, and you can also automate actions. So at certain moments the agent can trigger an action that you have set up, an action that can be performed at some point. Once you have this well configured, they can do many things for which before we needed to integrate many different pieces of software. But the most difficult thing about agents is the same as before: how are we going to know whether this is performing properly or not? Because they are going to take real actions. It's not just suggestions or predictions; they are going to take actions. So you need to be careful about whether they are performing the right actions or not.
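Because agents take real actions, the natural safeguard is the one Jonas hints at: constrain what the agent may do and check every proposed action before executing it. The loop below is a simplified illustration with made-up tool names, not OpenAI's actual agent API.

```python
# Simplified agent step with a guardrail: only allow-listed actions run automatically,
# everything else is escalated to a person. Tool names are illustrative.
ALLOWED_ACTIONS = {"send_report", "create_ticket"}

def run_agent_step(observation: str, decide, tools: dict, escalate) -> str:
    action, args = decide(observation)      # the model proposes an action
    if action not in ALLOWED_ACTIONS:
        return escalate(action, args)       # human review for anything unexpected
    result = tools[action](**args)          # execute the approved tool
    return f"{action} executed: {result}"

# Illustrative wiring:
tools = {"send_report": lambda to: f"report sent to {to}",
         "create_ticket": lambda summary: f"ticket '{summary}' created"}
decide = lambda obs: ("send_report", {"to": "ops@example.com"})
escalate = lambda a, kw: f"action '{a}' needs human approval"
print(run_agent_step("weekly metrics ready", decide, tools, escalate))
```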
Chris Brown: [00:25:33] I didn't know where we were going to get to in this conversation; I said we went off on a couple of tangents as well. But what's really interesting, and what I'm going to take away from this conversation, and I hope other people that are listening can draw this conclusion and maybe some other conclusions as well, is this: the technology has changed, no surprise, right? The capabilities have changed, the frameworks have changed, the models have changed. Things are evolving; you talked about adversarial neural networks, about agents. Well, what hasn't changed is the chasm. The chasm hasn't changed, and the change management hasn't changed. And that is telling, right? They look like they could be here to stay unless someone really starts to put some work in. The change management, humans are humans, I don't know if you'll get past change management. I don't know if we're going to start to see anything making the chasm smaller.
Jonas de Cruz: [00:26:29] I didn't say it hasn't changed, but the change is much slower than everything else. Some things are happening to improve this part. There are two tools, I would say, that can help here. One is the feature store, which has become very important, because you have features already calculated and it's easier to have the same information in the lab and in the real world; everything is in the same place, so it's easier to analyze and work with. The other very important thing is orchestrators. Do you remember when we were talking about how these AI software workflows are much more complex? An orchestrator is able to manage all of those situations and to create a bridge to pass through the...
Chris Brown: [00:27:51] Not a whole bridge across the chasm, but at least it gives you a chance to jump across it, right? Because it's not so wide anymore.
Jonas de Cruz: [00:27:57] It's like creating a small bridge between all the assets that we have, because there are many of them, not only one. These two things, which have become trendy very recently, are important.
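To make the two tools Jonas mentions concrete, here is a deliberately tiny sketch: a feature store means training and serving read the same precomputed features, and an orchestrator runs the steps of a multi-stage AI workflow and halts when one fails. Everything below is an illustrative toy, not Feast, Airflow or any other specific product.

from datetime import datetime

# Toy feature store: one shared source of precomputed features,
# so the "lab" and the production system see the same values.
FEATURE_STORE = {
    ("customer", "42"): {"avg_monthly_spend": 63.0, "tenure_months": 18},
}

def get_features(entity: str, key: str) -> dict:
    return FEATURE_STORE[(entity, key)]

# Toy orchestrator: run workflow steps in order and stop on failure.
def orchestrate(steps):
    for name, fn in steps:
        print(f"{datetime.now().isoformat()} running {name}")
        if not fn():
            print(f"{name} failed, halting the workflow")
            return False
    return True

def ingest():
    return True

def train():
    # Both training and later serving would call get_features the same way.
    return bool(get_features("customer", "42"))

def deploy():
    return True

orchestrate([("ingest", ingest), ("train", train), ("deploy", deploy)])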
Chris Brown: [00:28:12] I'm going to put you on the spot a little bit, and this possibly might be a little unfair, because I'm going to ask you for a prediction. When predictions are recorded, sometimes they don't age well. So, danger.
Jonas de Cruz: [00:28:25] Danger.
Chris Brown: [00:28:26] Danger zone time for us. How do you think the consumption of agents is going to happen? I'm trying to tie this back: in the first episode, I was here with Frankie a few weeks back and we talked, as we do, about the different ways of consuming AI. How are agents going to be consumed? Do you see a marketplace of agents consumed like software as a service? Do you see them as accelerators that still need to be customized? Are they the equivalent of no-code, low-code automation? What does that world look like for agents, do you think?
Jonas de Cruz: [00:29:00] In terms of consumption, all of the options you mentioned. You are going to have marketplaces, more open or more closed. Maybe you are in Salesforce, and Salesforce launches its own agents inside Salesforce, the same as any other big company today. But there are also going to be companies like OpenAI launching their own marketplace where anyone can publish their own AI agent. What is more interesting is how these agents are going to be created, because OpenAI has launched a way to create your own agents fully no-code, so you can create one just by talking with the system. You may still need some knowledge about APIs, because maybe you want to perform some action outside OpenAI and you need some API connectivity, so some technical knowledge is needed, but it's going to be more and more no-code. So software engineers are going to be, I can say, sorry about that, less and less needed at some point in the future. Not useless, and it will take a long time, but less and less necessary. A lot of things are going to happen.
Chris Brown: [00:30:37] It comes back to that point where we were all sitting as software engineers, a little bit smug: oh, we're all right, feeling a little bit sorry for the taxi drivers because self-driving is going to take over. But here we are, going back to that comment we made earlier. That's the reality of the world right now for software developers.
Jonas de Cruz: [00:30:55] But it's not only software developers, it's everything. What is happening here today could be happening without us, because, you know, we can now create people who don't exist. They can talk about a topic for hours, and they can speak many languages.
Chris Brown: [00:31:20] The AI filibuster.
Jonas de Cruz: [00:31:22] Right, exactly. And they can do all of these things. You asked before about what has changed, and I missed mentioning generative AI, because there is something unique in generative AI.
Chris Brown: [00:31:40] I try to get through an episode without talking about generative AI, but we're not going to make it.
Jonas de Cruz: [00:31:44] It's going to be short. I'm always thinking that, in the history of the universe, more or less the only form of life we knew that was capable of generating new, novel content was humans. And now, since a year or two ago, there is another thing capable of creating new content, content that never existed before. And this is happening; we are living in this historical moment, and I don't think we realize that this is going to change everything.
Chris Brown: [00:32:30] And on that note, I'm going to draw us to a close. Thank you once again for taking time out of your super busy schedule to come to the podcast room and talk to us about all those technical changes. It's been fascinating, even the tangents I took us off on. It's been great, so really, thank you for being here.
Jonas de Cruz: [00:32:51] Thank you. Happy.
Chris Brown: [00:32:52] And yeah, we'll call that a wrap. Just before we go, I'll put a call out for the next episode of the podcast series, where Frankie will be your host. Frankie is coming back from episode one to host our fourth episode. In a couple of weeks he'll be joined by Javier Yuste, Chief Data Officer of Zurich Insurance, so that will be back to an industry-focused conversation. We're moving all over the map on this AI conversation, like the multi-dimensional subject that it is. But once again, thank you, Jonas. Thank you.
Podcast
AI Horizons: Foundations, Adaption, Governance
Exploring AI Horizons on "The UnwAIred" Podcast! Join Frankie Carrero, Director of Data and AI at VASS, as he delves into the foundations, adaptation, and governance of artificial intelligence. Special guest Javier Yuste, Chief Data Officer at Zurich Insurance Spain, shares insights on leveraging AI for streamlined processes and enhanced customer experiences. Tune in for a deep dive into the evolving landscape of AI!
Conversations / AI Horizons: Foundations, Adaption, Governance
Frankie Carrero: [00:00:06] Hello there and welcome to another exciting episode of our podcast series, The UnwAIred. I'm Frankie Carrero, director of data and artificial intelligence at VASS. For those of you who don't know, VASS is a leading digital solutions company present in more than 25 countries across Europe, the Americas and Asia, and we help large companies with their digital transformation processes, developing and executing the most innovative and scalable projects from strategy to operations. But today we are here to speak about artificial intelligence. We want to explore the different understandings there are of artificial intelligence, where there is no real consensus; we have different opinions, not only among ourselves but in the sector and even in the science. So we want to speak about what it is and the evolution we see for artificial intelligence, and we're going to explore it from different points of view. The first one is LLMs, because a lot has been said lately about LLMs and about generative AI, and I think it could be the starting point for today's interview. I have the pleasure of having here with us today Javier Yuste, who is the Chief Data Officer of Zurich Insurance in Spain. And, well, I'm sure you know that Zurich is a leading multi-line insurance company; they serve people and businesses all over the world in more than 200 countries, if I'm correct, and it was founded around 150 years ago, so it's one of the oldest companies, I would say, at least in insurance. Is that correct?
Javier Yuste: [00:01:56] Yeah. We initially provided insurance in Switzerland, and after that, because the company was really successful, it acquired different companies in different countries. In Spain it has been operating for over 120 years.
Frankie Carrero: [00:02:16] Well, that's awesome, really. But anyway, like I was saying, we have Javier here today to talk with us about all this. So welcome, Javier.
Javier Yuste: [00:02:25] Thank you very much. I'm very happy to be here and well, really excited about talking about artificial intelligence.
Frankie Carrero: [00:02:34] Okay. It's a pleasure to have you here because we know your trajectory. You are the Chief Data Officer of a big company, dealing not only with artificial intelligence but also with AI governance and data governance, and we're going to talk a little bit about those today. But first things first: can you please explain what it is that you do right now in your role at Zurich?
Javier Yuste: [00:03:03] Okay. So I lead a team that manages the data within Zurich Insurance Spain. We guarantee the quality of all the data we gather from customers regarding policies, claims and all the different business activities, and with that data we also build, you may say, artificial intelligence algorithms, or you may say machine learning, but at the end of the day what we are looking for is to automate processes and make things easier, more streamlined and faster for our customers. So I manage the data quality part and I also manage the data governance part, and that is key when it comes to artificial intelligence, because you may build algorithms, but you need to guarantee that they work properly, that they provide the output you expect, and that they don't have any bias in how they treat your customers or how they treat different products or business lines. At the end of the day, I would say I manage the data ecosystem, and we are a key part at the center of the company, providing solutions to improve the speed at which we manage claims, to facilitate the onboarding process for new customers and new policies, and to support our back offices and all the processes within the company.
Frankie Carrero: [00:04:42] It's really interesting what you said about data, because not everybody is aware of it: you cannot take advantage of algorithms if you don't have the right data. You can build models to solve some of the issues you have in the company, but if the data is not correct, they aren't going to work properly, and you will probably make bad decisions because you don't have the right information, or your systems simply won't work well. So that's important, and it brings me to the concept of LLMs and generative AI, where we have models built with a lot of data from many different places, in most cases from all over the world: text downloaded from the internet and other sources, and images, also downloaded from the internet, used to train models like DALL-E or Midjourney; in the case of ChatGPT it's text. This is what has boosted artificial intelligence over the past year, and a lot has happened since ChatGPT was launched a year ago. So what's your take on generative AI? Why do you think it has been so important?
Javier Yuste: [00:06:05] Well, I would say the most important thing is that ordinary people have become aware of artificial intelligence, that it is something that is here, and it is here now. Before, when you said artificial intelligence, I think people were thinking more about Star Trek or something like that rather than the present moment. Now everyone is aware and everyone can play with it, and that's another difference: it's not something only a big company has, it's not something a government is doing. You can get into ChatGPT, plan your holidays, compare cars, ask whatever you want, and you get an answer. And here the problem is knowing whether that answer is accurate or not; that's the main problem. You may be studying at university, generate an essay, go through it and get the first draft of the paper you want to hand to your professor, but at the end of the day the content may be inaccurate, because there are different sources of data and it depends on the data used to train the model.
Javier Yuste: [00:07:32] And here we get to the data quality issue. If you search the internet you may find different answers to the same question; you may end up in one forum that says something and another that says the opposite. So for artificial intelligence, I would say we are now at the moment where we need to start thinking about how we use it and how we govern it. On the other hand, we are somehow competing to take advantage of it, to automate processes and to talk with customers in a more responsive way. Chatbots have been here for years, but they are limited to a certain scope: in insurance you are talking about a claim; in other companies, about other areas. Okay, I want to book a flight, so maybe I'm talking with a chatbot that helps me out. But the feeling that I'm talking to a machine used to be really clear, and now it's becoming a bit diffuse.
Javier Yuste: [00:08:50] Now, when talking to a machine, I may ask whatever I want. It's no longer something where you need to follow a guideline, press five or press two; you ask an open question and you get an answer. But the difficult thing is knowing whether that's the answer you were looking for and how accurate it is. For instance, we don't use artificial intelligence to talk directly to customers, because we need to be sure they are going to get accurate responses. We prefer that they go through a process with a chatbot that has been tested and has the right answers for specific questions, for instance regarding a claim. You can use WhatsApp to text us and you'll get a chatbot replying to you, or you may talk to a person. But in the next few years, with the proper governance framework on top of it, I'm sure this will progress and you'll be able to have a more direct conversation with artificial intelligence algorithms within any company, regardless of the sector.
Frankie Carrero: [00:10:07] Okay, we'll dive deeper into AI governance later. I'll take what you were saying about generative AI: it is something that has arrived recently, in the past two or three years, but before that we were already doing artificial intelligence, and the hot topic was machine learning, and deep learning as well. How would you define artificial intelligence in a broader context than LLMs, beyond generative models?
Javier Yuste: [00:10:37] Well, I would say artificial intelligence, and it doesn't matter what the algorithm is or whether it's just a few rules, is whatever artifact you have in place that can make automated decisions or suggestions. For example, if you are quoting an insurance policy and I want to recommend what fits your vehicle better, then depending on the age of the car it may or may not make sense to have full coverage. If you have an old car, maybe full coverage doesn't fit: if your car is, I don't know, eight or ten years old, maybe you're going to pay too much, and in the case of a total loss you won't get that much money, so it probably isn't the best product. That can be just a rule. So in that sense, whatever the algorithm. And sorry for going back to regulation, but the AI Act is an act that is going to be issued by the European Union to put a framework of governance on top of artificial intelligence, and there even linear regressions are in scope. So it's not about having the trendiest, latest technology; it's also about the process. I'll give you another example. In the insurance industry we've been doing, call it data science, call it data mining, call it models, it doesn't matter, call it artificial intelligence, but for pricing we've been building models for ages. For the last, I don't know, 60 years we've been building models to estimate the probability of having a claim and the severity of that claim, so that we can set the price while being sure we'll have enough money to pay out whatever loss the customer has.
Javier Yuste: [00:12:50] So pricing models are also affected by this act, even though they've been around for over 60 years. In that sense, I would say debating the definition, whether this or that is artificial intelligence, is not a really productive conversation, because it doesn't really matter. At the end of the day, what matters is what you, as a company, and your customer get from it. Wondering whether to call this or that algorithm AI doesn't really add anything to the table. It's about streamlining your processes, providing better customer service, getting to know your customer, providing better products, and being able to answer whatever inquiry or question your customer may have. So I would say artificial intelligence is going to help us streamline and improve our processes, but I think it's more about the process itself than about the artificial intelligence, because if you don't have a process to plug the artificial intelligence into, the customer won't see any change, any difference.
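Javier's point is that the plain business rule and the statistical model both count as automated decision-making. Here is a minimal sketch of the two side by side; the thresholds and coefficients are invented purely for illustration.

def recommend_coverage_rule(car_age_years: int) -> str:
    # The plain rule from the example: an old car may not justify full coverage.
    return "third_party_only" if car_age_years >= 10 else "full_coverage"

def claim_probability_linear(car_age_years: int, annual_km: int) -> float:
    # A deliberately simple linear score standing in for a pricing model;
    # in the framing discussed here, this would fall in scope of the AI Act too.
    score = 0.02 + 0.005 * car_age_years + 0.000002 * annual_km
    return min(score, 1.0)

print(recommend_coverage_rule(12))                    # third_party_only
print(round(claim_probability_linear(12, 15000), 3))  # 0.11 with these made-up numbers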
Frankie Carrero: [00:14:16] Yeah, I agree. At the end of the day, most of the processes where we are applying artificial intelligence right now had already been improved with traditional algorithms. Those are sometimes good, but they don't cover all the solutions you need, and sometimes they don't bring the best solutions to the table. That's where artificial intelligence comes in, and I agree with you: whatever algorithm we're speaking about, they can help, because they can do things better than how we were doing them traditionally, or at least before these past two or three years. I've seen you have applied artificial intelligence to many projects in your company; you said that before. But how do you decide which projects should be tackled with artificial intelligence and which ones should be discarded?
Javier Yuste: [00:15:09] Well, before we get there, what we ask ourselves is: what can be improved and will bring significant value? And you may call many things value. We may be talking about money, but not only; we may be talking about the time we need to process a claim and pay the customer, or about how many hours people within the company spend on manual or non-value-added tasks, and so on. Once we identify the problems we want to tackle, we need to decide how, and in the how, artificial intelligence is one piece, but not the only one. Here you have a trade-off between the viability of this technique or that other technique, or forgetting about data and algorithms and doing something simpler, or maybe changing the whole process, which would be a traditional process-change project. From that we build a business case with the different options. Some of them you know at the beginning are going nowhere, and then you have that cost-benefit trade-off and you decide what you want to do. But it's not doing artificial intelligence for the sake of doing it.
Javier Yuste: [00:16:49] It's more about the final product, and that involves many areas, not just mine. In any project we do we have two or three data scientists, but probably something between 8 and 20 people working on it, because maybe you need a new development on a system, you need some final users to test it, you need to gather all the requirements from the business user about what you want to accomplish, and so on. So we are one piece, I would say a sexy one in the sense that there is a hype about artificial intelligence and what we do is somehow new, or at least perceived as new, even though we've been doing this for years. When I started working with data it was called data mining, then it was called data science, and now data science is falling behind and we talk about artificial intelligence. But at the end of the day, many of these algorithms were written on paper, in many cases during the 1950s or 1960s, and there was no machine to really implement them. So we are now implementing what our grandfathers designed.
Frankie Carrero: [00:18:16] And with all these projects, in some sense we are always trying to become a data-driven company, which is something we aspire to be. Do you think organizations have been re-adapting and building a new workforce to tackle this data-driven goal, or do you think we are still behind? I mean, do you see companies beginning to organize their processes around data, or do you think that is still not being considered or applied?
Javier Yuste: [00:19:00] Well, I think everyone is talking about their data science team, their new chief data officer, many different names, and I would say every company is doing something. But if you think about becoming data-centric and you just think, okay, I'll do some projects, I'll hire a few people or some consultancy companies like yours to help me out, then sure, you will improve some processes, but you won't be data-driven, because you need to change the way the company as a whole operates. It's good to do projects to improve things, and probably you get a big payoff from that, but you need to start designing what you do thinking about the data first, not thinking, okay, I'm going to run that process and whatever data we may get from it will improve the same process. If you put an algorithm there, maybe that's no longer the right process. If you really have a system that can automate some tasks, maybe your back office can do other work, and you may start to improve in some areas and gain more capacity for others.
Javier Yuste: [00:20:24] You are going to improve later on. So in that sense, I would say we are on the way, and by we I mean every company now, because we are more data-driven, we have digital channels, and the way you communicate with the customer really changes everything. But there are many digital channels, from several companies, that end up with someone typing something on a computer and manually sending an email back. That is not digitalizing the process, and that is not being data-centric; you need to think about the whole value chain and change it. And that takes time, because every company has legacy, meaning the systems you've been using, and everything is changing so fast that it's difficult to keep pace. But you need to stay focused on accelerating, because if not, someone else is going to win and you are going to lag behind.
Frankie Carrero: [00:21:32] So we were trying to finish the digital transformation process, and now we've entered a data transformation process, or something like that, because that's the next step, and we are still struggling with it. Is that what I understood from your words?
Javier Yuste: [00:21:53] Well, what I meant is that it's not about hiring a bunch of smart people to do some really nice projects and win a prize in some contest. It's more about changing the way everyone thinks about the business: how we are going to manage the data we need, how we are going to get that data, and how we are going to make things better for our customers. When you start to think that way, you start designing things that way; you think data-driven, not just because you improved some process, maybe even your core process, but something that remains external to the way you operate daily. It has to be at the center of the company, and it has to come from the CEO in the first person.
Frankie Carrero: [00:22:52] Totally agree. Anyway, these changes are bringing new risks and liabilities that we didn't have before, and you already said something about that at the very beginning. AI governance is one of the, I would say, disciplines that we can use, that we will have to use, to address these things. How do you think organizations have to operate to adapt to the ethical responsibilities, and also the social responsibilities, they take on just by applying artificial intelligence? And this is related to the European act you mentioned before.
Javier Yuste: [00:23:37] Yeah, but at the end of the day the European act is just, I would say, the regulatory view of the minimum you need to do to be compliant with the law, and if not, you may get a fine and so on. But you need to go further than that. In Zurich we have the data commitment, which has several points, but what it essentially says is: we won't share your data with a third party, and we will use your data to make things better for you as a customer, not to make money from it or some of the other things that may be the business model of other companies. With the act, what you need is to be sure that your models are not discriminatory and don't have biases that really affect customers. But it doesn't mean you are building the model the customer actually wants. Under GDPR, a customer can oppose having his or her data used to build a model, or being scored by many models, and so on.
Javier Yuste: [00:24:54] At the end of the day, I would say the AI Act will set the minimum standards, and the companies that really focus on what the customer does or doesn't want done with their data are also going to be recognized by customers: okay, I trust this company, because what they do with my data is the right thing, instead of, I don't know, tracking me all day through an app or some of the other things many people give away really easily because they get some service on their phone. But it's really private information: where you are and when. So Google tells you, do you want to go home, because it's this time of day, you are at work, and at this time of day you usually go home. Okay, you get the map, and that makes your life easier, but you should be aware that that information is being shared, and you should decide freely whether you want it to happen that way or a different way.
Frankie Carrero: [00:26:05] And since we are using applications that are not located in Europe, I mean, most of the services we use, or many of them, are located in the United States, and I won't speak about other locations in the world because we don't really interact with them that much, do you think this game we need to play, using some services in Europe and some services in the US, with different regulation, can impact the way a company applies all these rules?
Javier Yuste: [00:26:38] For sure. At the end of the day, at least in Zurich, we keep all the data of our customers within the European Union, when the country is part of it. But we are going to have different legislation in different territories. I would say we are usually more restrictive in Europe than in other areas of the world, and that may limit what we can do here, but it also protects people and protects their data. It's true that we have tended to have stronger individual rights protection, in the sense that you have the right to manage your data and so on, but I think that will be implemented all around the world in the next few years, because otherwise governments are going to lose control of what is going on with the different algorithms and with people's data. So you see big IT companies getting huge fines from the European Union, changing their policies and adapting them to Europe; you see Elon Musk saying he might close X in Europe, and that's an option. We don't know how this will progress, but at the end of the day you get some convergence. If you think about cars at the beginning of the 20th century, whoever was making cars, there was no legislation for them, and now it doesn't matter where you travel around the world, you get the same manufacturers making cars for the whole world. Maybe when it comes to pollution, some countries or continents have lower standards, and maybe in some poorer countries you don't have airbags or something, to lower the price, but there is not such a big difference. I think this will be the same in just a few years, and I say a few years because this is going so fast that if we wait 20 or 50 years, everything will be different; what we are talking about now won't apply.
Frankie Carrero: [00:29:07] Well, that's how it looks, and it always happens: technology moves faster than legislation. So even if regulators try to cover all the activities and needs we have right now regarding data regulation or AI regulation, maybe by the time they get there it won't be enough. Or do you think that if companies could be closer to these regulations, to these working groups, it could help things go faster and be more productive?
Javier Yuste: [00:29:45] But at the end of the day, if you think about artificial intelligence, what it does is somehow replicate what a human being might do. You get some inputs, call that data from customers, call that a call to action: I want to book a restaurant, whatever it is. And you get an output, whatever it is: your booking is done, or your university essay is written by ChatGPT. So you can regulate that. There is an input, there is something in the middle, and, from a theoretical point of view, without talking about how the algorithm works, you can regulate what it may do or what it shouldn't be allowed to do, and you can regulate the output. For sure there will be details, and new types of algorithms, and maybe you need to revisit it from time to time. But saying, okay, you cannot make automated decisions that take into account the sex of the customer, you cannot do things like that, holds no matter what sits in the middle. At the end of the day it's something like: you cannot have a bias regarding this or that, and that is going to stay the same.
Javier Yuste: [00:31:15] We also don't need to change the laws every 20 years; they stay fairly stable. So in that sense, I think you can make a regulation that lasts. It will have to adapt, but probably to new scenarios. Okay, if your car is going to drive by itself, what happens if you crash, and who is responsible? Maybe you need to address that, but before that you need to decide that a car can drive by itself and that it's legally possible, and then you can think about the legislation that has to go with it. I'm not saying it's easy, but I think it's something we can do now, and that's what the European Union is trying to do with the AI Act: give us a framework to work in a safer place. As for whether that is a burden on making progress with new technologies, I don't have that view now, but we'll see in a few years.
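One way to picture the outcome-focused regulation Javier describes is a simple check on the decisions themselves rather than on whatever algorithm sits in the middle: compare approval rates across a protected attribute and flag large gaps. The data and the 0.8 threshold below are illustrative assumptions, not a compliance recipe.

# Toy bias check on automated decisions, regardless of the underlying model.
decisions = [
    {"sex": "F", "approved": True},  {"sex": "F", "approved": True},
    {"sex": "F", "approved": False}, {"sex": "M", "approved": True},
    {"sex": "M", "approved": True},  {"sex": "M", "approved": True},
]

def approval_rate(records, group):
    rows = [r for r in records if r["sex"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

rate_f = approval_rate(decisions, "F")
rate_m = approval_rate(decisions, "M")
ratio = min(rate_f, rate_m) / max(rate_f, rate_m)
print(f"approval F: {rate_f:.2f}  M: {rate_m:.2f}  ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: outcomes differ substantially across groups")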
Frankie Carrero: [00:32:38] Thank you for sharing your thoughts, because this is an important matter, at least to me. You know, time flies; we've come to the end of this conversation. It's been a pleasure having you here, Javier, and it's been an exciting conversation, with plenty of food for thought for after we finish. So thank you very much for being here.
Javier Yuste: [00:32:58] Okay. Pleasure is mine.
Frankie Carrero: [00:33:00] We'll be back in a couple of weeks with another exciting episode, I'm sure, of The UnwAIred podcast. To the rest of you, please stay tuned and we'll be in touch.
Javier Yuste: [00:33:13] Thank you. Bye bye. Thank you.
Podcast
What has AI done for marketing lately?
Welcome to another exciting episode of our podcast, where host Chris Brown, COO of Intelygenz, a VASS company, engages in a captivating conversation with Chris Vriavas, VP of Product and Strategy at CONTENTgine. In this episode, our guest shares invaluable insights not just into the cutting-edge techniques that have propelled CONTENTgine to the forefront of AI-driven marketing, but also into how they've successfully integrated AI into their products and implemented strategic change management to embrace the technology. Discover how these key factors have led to the holy grail of creating true market differentiation and an impressive return on investment. Whether you're a marketing enthusiast or a business leader looking to transform your marketing game, this episode is a must-listen.
Conversations / What has AI done for marketing lately?
Chris Brown: [00:00:06] Hello and welcome back to this second episode in The UnwAIred podcast series from VASS. If you didn't watch or listen to the first podcast, as a recap, my name is Chris Brown and I'm the chief operating officer at Intelygenz, a division of VASS focused on AI production deployments. If you did miss that first episode, I definitely recommend you go back and have a listen; it really set the foundation for how we cut through the hype of AI and get to ROI. But today I am thrilled, and actually privileged, to be joined by my good friend Chris Vriavas, head of product and strategy at CONTENTgine, as we pose the question: what has AI done for marketing lately? It's great to have you here, Chris. You're one of the masterminds behind CONTENTgine's success, and it's great to have you with us. Thanks for joining us.
Chris Vriavas: [00:01:10] Thanks for having me. It's a pleasure.
Chris Brown: [00:01:12] So first things first, Chris: explain to our unwAIred audience what your role is at CONTENTgine and what keeps you busy, and then we can go a little deeper into what CONTENTgine does and how it's using AI to impact the world of marketing. It's obviously going to be more of an industry-focused podcast this week.
Chris Vriavas: [00:01:32] Sure. CONTENTgine is a marketing services company. We really specialize in demand generation and lead generation services for vendors, but our mission statement is to help educate all professionals, in all industries, to do their jobs better. So we're focused on figuring out how to get the right tools, mostly technology tools, in front of professionals so they can execute their jobs better, solve challenges at work and find solutions to those challenges. The way we do that is we've amassed a large amount of B2B vendor content, and we work to get it in front of the right people. We think of it as fairly basic targeting: we target various roles and functions within organizations and try to help professionals do their jobs better.
Chris Brown: [00:02:36] I can tell you, in my world there's nothing basic about marketing. I spend a lot of time around complex AI solutions and I'm pretty comfortable in that arena, but the world of marketing is still a little bit of a dark art to me, so I don't even think of it as basic. Maybe in your world it's basic.
Chris Vriavas: [00:02:56] Chris, I'd say the same thing about the AI part of the equation. But really, what we're doing is putting content in front of professionals who are looking for solutions to their challenges, and we find vendors that provide those solutions. If we're doing our jobs right, we're creating a good connection that way. So we're focused on getting content in front of people, figuring out who those people are, and trying to marry them to vendors, such as Intelygenz, to find prospects and companies interested in those products and services.
Chris Brown: [00:03:31] And just to set context, why the content, and not only because it's in your name, obviously? Why is the content so important to the role that you play? Because you do things slightly differently at CONTENTgine. Can you set that context of why the content is so important?
Chris Vriavas: [00:03:50] Yeah, of course. All of the interactions that happen, especially digitally, where people are educating themselves, are based on engaging with content: they're trying to learn. Even a podcast is a type of content, and people are hopefully listening to this one to learn a little more about how AI can help in marketing. So the content is the key center point for understanding what people are looking to do in a buyer's journey. Again, going back to professionals and companies looking for solutions to challenges: they're going to be consuming content. That content can be case studies, very powerful customer success stories, white papers, e-books, podcasts, videos explaining how certain services or products can help. That interaction, that engagement with content, is key to everything. From a demand generation perspective, demand generation is really about how you help educate professionals over time, whether they're in market or not, and the buyer's journey means engaging with content and learning how various products and services can help an organization. Without the content part of the equation, you really don't know what an individual or a company is looking to do, so content is at the core of it. We've seen a lot of companies using technology to better understand the types of companies and individuals they talk to, but we think the most critical signal is what content is being consumed. So content is key to everything in marketing.
Chris Brown: [00:05:40] Wow. And I should declare that you are one of our good clients, so we've had many conversations around this subject. The whole marketing piece is still a little bit of a dark art to me, I'll be honest with you, Chris. When you dumb it down a little for me, we talk about consumer journeys in marketing: when you're thinking about buying a car or a TV, I think we use the TV analogy, a lot of the decision is made before you even engage in any sales conversation at all; the individual has spent a lot of time outside of your sphere of influence. And this is what you're talking about, 100%.
Chris Vriavas: [00:06:28] So the TV is a great analogy; cars as well. With TVs, you don't necessarily know the brand you want when you start the process; you're looking for certain features, the size of the television, the type of television, and so on. So you're going to research that, look at different review sites, look at different pieces of content explaining what that TV can and can't do. It is all about the buyer's journey. In B2B we are selling products and services to businesses, and the concept is the same: you try to educate yourself on what's available, what the different options and features are, and what they can do. And talking about the digital buyer's journey in particular, how people educate themselves nowadays, well over 80% of that happens outside of a vendor's website, outside of their in-house marketing, their corporate site. It's people researching online, and offline to a lesser extent, searching around and trying to triangulate what's going on, because again, at the beginning of the buyer's journey they don't necessarily know the brand that offers the solution they're looking for. So a lot of that research happens outside of the vendor, and less and less of it is going to be...
Chris Brown: [00:08:03] So not even your content.
Chris Vriavas: [00:08:04] Correct. Now, what we do is take a vendor's content and amplify it outside of that vendor's website; that's part of what we offer. So if Intelygenz, and I know you guys have a lot of very interesting customer success stories on your site...
Chris Brown: [00:08:22] Thanks, Chris. That's a good plug. We have amazing customer success stories, you're absolutely right.
Chris Vriavas: [00:08:26] But in order to access those, you have to go to your website, right? So how does someone who doesn't know who Intelygenz is find that content?
Chris Brown: [00:08:36] Someone who doesn't know who Intelygenz is? I don't understand.
Chris Vriavas: [00:08:37] Well, hopefully after this podcast that problem is solved.
Chris Brown: [00:08:41] Sorry, I keep interrupting. Carry on. I get your point.
Chris Vriavas: [00:08:44] So it's about where that content lives and how people find out about it outside of Intelygenz. A lot of the time, I believe, people don't know the available solutions to their problems, just like in the TV analogy: you may not know all the features of a TV, or exactly what type of TV fits best in your room, looks like a piece of art, doesn't take up too much space, doesn't hurt your wallet too much. You don't really know that until you start researching. So a lot of that digital buyer's journey happens outside of a particular vendor's website. And the other trend happening across marketing, of course, is the younger generation of buyers who have grown up in the digital world; it's very native to them, that's how you find what you're looking for, that's how you research and find solutions. So the digitization of marketing is also accelerating. Younger buyers research on their own; they don't necessarily give a particular vendor a call and say, Chris, walk me through what Intelygenz does.
Chris Vriavas: [00:09:54] That's not really how it works. In your example, they're going to ask: what can help me with this particular problem, and who are the vendors that could implement a solution for me? They start their journey that way; they research it before they even know who the brand is. And again, two thirds of B2B buyers at this point are under the age of 40; they're 18 to 40. That generation is growing up in a digital-first world, and I know you have kids, as do I; their ability to find things and find solutions online is going to dominate the future. So that digital path, and figuring out how to find solutions, is really what we're set up to help vendors and professionals do. And that has really changed the landscape of how people find solutions.
Chris Brown: [00:10:53] And I haven't lost sight of the fact that people are tuning in to hear an AI podcast, not a marketing podcast, and we'll get there, but I want to keep setting the context, because I think it's important for when we get to the AI solution. In the last episode I talked a lot with Frankie, who joined me and held my hand through my first podcast, about always starting with the business challenge, because you can get carried away with those starry AI eyes, but really you're still trying to solve a problem, address a challenge, go after a growth opportunity, whatever it happens to be. So I want to stay on the context setting for a little bit, because there are a couple of pieces you mentioned there. You're talking about people consuming content off your website, but also, more and more, companies are becoming interested in consumers who are consuming content in their space, in their sphere, in their marketplace if you like, but not necessarily their own content, because that still tells you something as well, right?
Chris Vriavas: [00:11:58] Yeah, 100%. And again, to go back to the TV analogy, if you're Sony and someone is researching a Panasonic, of course you want to know who that is, because they're clearly in the market for that kind of solution. One of the things our solution allows, because we are marketing, promoting and syndicating content from all kinds of vendors in particular product categories, is for us to generate intelligence about who consumes what type of content in a particular product category, so that vendor A can see who's looking at vendor B and vendor C, or at all vendors in that category. That gives them a better sense of, a, whether a company is in market to potentially buy a product or service, and, b, if so, who they are looking at. In terms of performance, what we've seen from some of our clients is that the leads and intelligence we give them, when we tell them to go after these people and these companies, actually convert better when we find those people engaging with their competitors' content. We see a lot of scenarios like that. So whether it's the Panasonic or the Sony or the LG TV, anybody consuming any of that content is giving a very powerful indication that they're in a buying journey.
Chris Brown: [00:13:25] They're in the buying journey for a product that you sell.
Chris Vriavas: [00:13:28] Yeah, exactly.
Chris Brown: [00:13:29] And this is just intriguing for me, so humor me for one more and then we'll get on to the AI. We're not saying the art of sales is dead, right? The art of selling is still there. What we're saying is that you're likely to have a very different, much more educated conversation: less about educating your buyer about your product, still some, I'm sure, but more about defending it, explaining a little more, filling in the gaps in their knowledge. It's going to be a different conversation in the sales cycle.
Chris Vriavas: [00:14:06] Yeah, sales is never going to be dead. I think what changes a little bit is the education that can happen before a conversation with a salesperson at a vendor, because of the amount of research and content being produced. So that conversation will be different, but also the salespeople can arm themselves a little better if they understand more about what's been consumed, and that's really what our play is: what have they consumed before that? That will give you important tidbits to talk to them about. If you know their challenge is X, the first conversation you have with them should be about X. You don't want to presume and say, I know what your challenge is, but if you have that insight beforehand, if you know they've got a specific challenge and you can try to address it, that might help you have a better, more productive conversation.
Chris Brown: [00:15:10] Yeah. Okay, let me drag us, and when I say drag us I really mean drag me, back to the subject of AI, because it's great context; I think you've set out CONTENTgine's ambitions really well. I'd love you to talk to us about your big ambition on a large scale, and then we can get into some specifics later: the big ambition of what you'd really love to see as the end goal when you're using AI on your journey to help your clients.
Chris Vriavas: [00:15:49] So AI has been used a lot in marketing services in general, really on the pure data front, and I'll dive into that a little. What I mean by that is the contacts, the professionals within particular companies, and the companies themselves: how do you predict what type of company might fit your ideal customer profile, your ICP, and which individuals within that company might be decision makers or influencers in a decision? We do that too, and it's very powerful. But I think we felt that a lot of the players in our space are focused much more on the data side of the equation. We were much more interested in who those individuals and companies are and what they're engaging with on the content front. So instead of focusing only on that, and we've done some work on predictive modeling for contacts and companies, we really saw a need to rationalize and understand the content being consumed and what that content is for, to tie it back to the solution and the challenge they're trying to solve.
Chris Brown: [00:17:03] So you believe the content can tell you more about purchase intent than maybe the structured profile data and the company data. Or maybe it's an augment, right? It's an "and", I'd say. Maybe...
Chris Vriavas: [00:17:16] ...at least as important.
Chris Brown: [00:17:17] As important as, yeah.
Chris Vriavas: [00:17:19] And content, because, again, content production in the world has skyrocketed over the last decade and it will continue to do so: more podcasts, more pieces of content. In a recent Content Marketing Institute (CMI) report, for 2024, 45% of marketers are going to spend more money on content and 42% will spend as much as they did in 2023. So at a minimum, despite any macroeconomic headwinds, they're spending at least as much, if not more, on content, and the amount of content continues to explode. Then, within content, how do you glean from a 3 or 10 page case study what it's solving? How do you process that? It's easy to say Chris Brown from Intelygenz consumed this white paper, but what is that white paper about? What is that case study about? What is that podcast about? We feel that understanding that, and pulling insights out of the content, is key to understanding what they're trying to solve in that buyer journey. So how do you process all this text, or audio in the case of a podcast, and pull out insights to say: aha, this is what Chris has researched, this is what Chris's colleagues have researched, maybe Intelygenz is looking for a solution, and therefore we think these are the challenges they're trying to solve? And if you think about the complexity of all that text and the sheer volume of content, understanding what that content is trying to achieve, and understanding it at scale, is important. We're adding thousands of pieces...
Chris Brown: [00:19:09] It must be hundreds of thousands of pieces.
Chris Vriavas: [00:19:11] We've got hundreds of thousands of pieces, and there are hundreds of thousands of pieces produced every year. We get new content from our clients every month, and new content that we go out and find from vendors every month; that's thousands every month. So it's not just the sheer volume of content, it's the type of content and what the content is about. It's not a zero or a one; there are complexities in content. Processing that and understanding what that content is really about, and what it can tell us, for the marketers and the salespeople, as we talked about earlier, that's key. Can we figure out how to arm that salesperson, arm the marketing department at a vendor, and say: look, here's the case study they consumed, here are some white papers they consumed, and here is exactly what those pieces of content are intended to do; this is the solution they're looking at, this is the challenge they're trying to solve. Doing that at scale is a challenge; it's a data challenge, a different kind of data challenge, and that's really what we set out to do with AI.
Chris Brown: [00:20:26] Wow. I know, because I'm involved in some of the projects you're doing with AI, that you've got a number of different aspects you're going after, attacking this elephant-sized ambition one bite at a time, and you've made good headway on that journey pretty quickly. I'd really like you, if you're willing to share in public, to talk a little bit about some of the specifics of the solutions you're implementing to serve this big ambition of understanding the content journey, the buyer journey someone takes through the consumption of content.
Chris Vriavas: [00:21:19] Yeah, absolutely. Happy to share. And this is part of what our offering is.
Chris Brown: [00:21:23] I'm glad you said that because that's the whole point of being here, Chris. Yeah.
Chris Vriavas: [00:21:25] So, you know, we had done some preliminary modeling to set the stage around content, to pull out basic aspects of the content. Who's the vendor? Maybe some metadata on the piece of content, when it was produced. But one of the first projects we engaged with Intelygenz on was to pull out key phrases. Right. So what are the key phrases within a piece of content that we can pull out to try to get a better sense of the topic? What is that solution, and what is the challenge? The challenge and the solution pairing. So we've initially been able to pull that out and then say to a potential client of ours, hey, this vendor is consuming this content. Great. Okay, what is that content about? Here are some key phrases and the topics that this content is about. We then...
Chris Brown: [00:22:18] This content can be thousands of words long, right? It can be multiple pages.
Chris Vriavas: [00:22:23] Yeah.
Chris Brown: [00:22:23] It can be audio, transcribed. It could be audio, by the way, yeah.
Chris Vriavas: [00:22:27] We're focused much more, at the moment, on text-based content, but it could be audio, it could be video-based as well, transcribing that to text. So again, what the key phrases can do is start to pull out that meaning and give someone: great, Chris consumed these five case studies. Okay, I get it. So now do I have to read all five case studies to figure out what Chris is interested in? How do I get some key phrases out of that to try to understand where Chris is going, what he's trying to solve? And then we can cluster some of those key phrases together to get a general sense of what people are looking at in general: in a product category, across a company, across an industry. So putting that together, that was really our initial breakthrough, and that was something that we hadn't seen anybody else offer in our space. They haven't really gone down that path. We've also done some work on NBA, as you know, next best asset. This is a traditional digital marketing concept where, based on what people have consumed, they may also want this. So if you go to Amazon: people who bought this also bought this.
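The episode doesn't go into implementation detail, but as a rough illustration of the kind of key-phrase extraction Chris describes, here is a minimal sketch using TF-IDF n-gram scoring with scikit-learn. The sample documents and parameters are placeholders, not the production pipeline discussed above.

```python
# Minimal sketch of key-phrase extraction: score candidate n-grams per document
# with TF-IDF and keep the highest-scoring ones. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Case study: reducing ticket backlog with automated classification",
    "White paper on churn prediction for subscription businesses",
    "Podcast transcript about scaling content marketing with AI",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
tfidf = vectorizer.fit_transform(documents)          # docs x n-grams matrix
terms = vectorizer.get_feature_names_out()

def top_phrases(doc_index: int, k: int = 5) -> list[str]:
    """Return the k highest-weighted n-grams for one document."""
    row = tfidf[doc_index].toarray().ravel()
    best = row.argsort()[::-1][:k]
    return [terms[i] for i in best if row[i] > 0]

for i, doc in enumerate(documents):
    print(doc[:40], "->", top_phrases(i))
```

Real systems typically add candidate filtering and embedding-based clustering on top, which is what makes the "what are people across an industry looking at" rollups possible.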
Chris Vriavas: [00:23:51] But doing that in a way that helps the buyer journey, that was really critical. So you've got these key phrases; now you've got some vendor information, some metadata on the content. How do we push them down a journey? Our goal is to really, again, help all professionals do their jobs better. So how do we do that? Let's give you more content that is aiming to help you solve your challenge specifically. So that's been a big breakthrough. And that helps not just in a user journey coming to our flagship website, Content Com, where we have hundreds of thousands of pieces of content. So it's not just the buyer journey; it also helps us on the targeting end. It's how we do some of our email outreach, how we know what potentially to send to Chris or other employees at Intelygenz, because we can see this is a way to think about what content to use. So the NBA has been key. But what we're really excited about is something that we've accomplished with Intelygenz over the last several months.
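The "people who consumed this also consumed this" idea maps to classic item-to-item co-occurrence. A toy sketch follows; the consumption log, asset names, and ranking rule are invented for illustration and not how the next-best-asset feature discussed above is actually built.

```python
# Toy item-to-item co-occurrence for a "next best asset" suggestion: assets
# consumed together by the same professional become candidates to recommend
# next. Real systems layer recency, journey stage, and business need on top.
from collections import Counter, defaultdict
from itertools import combinations

consumption_log = {
    "alice": ["case-study-42", "whitepaper-7", "podcast-3"],
    "bob":   ["case-study-42", "whitepaper-7"],
    "carol": ["whitepaper-7", "podcast-3", "case-study-9"],
}

co_counts: dict[str, Counter] = defaultdict(Counter)
for assets in consumption_log.values():
    for a, b in combinations(set(assets), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def next_best_assets(asset: str, already_seen: set[str], k: int = 3) -> list[str]:
    """Rank co-consumed assets the user has not seen yet."""
    ranked = [a for a, _ in co_counts[asset].most_common() if a not in already_seen]
    return ranked[:k]

print(next_best_assets("case-study-42", already_seen={"case-study-42"}))
```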
Chris Vriavas: [00:25:03] It's what we call business need. And for me, this is kind of the tip of the spear, if you will, for what we're trying to do with engaging professionals and understanding this juncture of the people, the companies and the content. Those are the three pillars of our pyramid: who are the professionals, who are the companies, what is the content. And the business need is taking a lot of this other work we've done and saying, okay, this account, based on the key phrases and all the information we've gleaned from the content they consume, this is our best guess on what their business need is. This is our best guess of what this account is trying to solve. And, you know, nothing in marketing is perfect, but it gives you, as we said earlier, a way to maybe arm that salesperson, maybe arm that marketing team with: this is what we think this account is interested in. So hey, marketing team, maybe when you get them as a lead you nurture them with topics around this business need. Or Mr. or Mrs. Salesperson, if you're going to do some outreach and give these people a call, this is what we think their challenge is. So we've crystallized that down to a business need, and that's really exciting.
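One simple way to turn an account's accumulated key phrases into a "best guess" business need is to score them against a small taxonomy of need descriptions, for example with TF-IDF cosine similarity. This is a hedged sketch only; the taxonomy and phrases below are invented, and the actual approach built in the project may differ considerably.

```python
# Sketch: map an account's aggregated key phrases to the closest entry in a
# hypothetical business-need taxonomy via cosine similarity. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

business_needs = {
    "reduce churn": "predict and prevent customer churn in subscription products",
    "automate support": "classify and route inbound support tickets automatically",
    "scale content ops": "produce and personalize marketing content at scale",
}

account_phrases = "churn prediction retention model subscriber cancellation signals"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(business_needs.values()) + [account_phrases])
needs_vecs = matrix[: len(business_needs)]
account_vec = matrix[len(business_needs):]

scores = cosine_similarity(account_vec, needs_vecs).ravel()
best = max(zip(business_needs, scores), key=lambda pair: pair[1])
print(f"Best-guess business need: {best[0]} (score {best[1]:.2f})")
```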
Chris Vriavas: [00:26:26] So we're able to offer a lot of this intelligence through our platform, but we can also append this intelligence to leads. We also generate leads for clients, and leads have typically been: here's the demographic information, here's the firmographic information, they consumed this. Well, now we're able to say, and for this account, here's their business need. That's what we're really excited about. So those have been the three pillars of what we've worked on so far with content. There's much, much more to explore. One of the things we're thinking about is getting a better sense of engagement rates: what are the types of content that just seem to spark people's interest, whether they're at the beginning of a buyer's journey or just doing some research? So understanding content engagement and which types of content perform better is another avenue we want to explore, again in tandem with all the details about the content. Why does this piece of content seem to have a better engagement rate, either in an email or on a website, than that piece of content? That's one of the areas we're looking to explore. Yeah.
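At its simplest, the engagement-rate exploration Chris mentions is grouping interaction events by content type and comparing how often delivered content is actually engaged with. A minimal pandas sketch with made-up events:

```python
# Sketch: compare engagement rates by content type from a hypothetical event log.
import pandas as pd

events = pd.DataFrame({
    "content_type": ["case study", "case study", "white paper", "podcast", "podcast"],
    "delivered":    [1, 1, 1, 1, 1],
    "engaged":      [1, 0, 1, 1, 0],   # e.g. clicked, played, or downloaded
})

rates = (
    events.groupby("content_type")[["delivered", "engaged"]].sum()
          .assign(engagement_rate=lambda d: d["engaged"] / d["delivered"])
          .sort_values("engagement_rate", ascending=False)
)
print(rates)
```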
Chris Brown: [00:27:39] And we've started a little bit on an interest score as well, Chris. It's fascinating stuff, right? I could sit and listen to this; I'm just intrigued by the whole piece. As I say, maybe it's because I see it as a little bit of a dark art, and bringing some light to that is really interesting to me. If I flip the tables and look at the projects that we've done from the other side of the table, it's been super interesting. From our perspective, this has been a great journey for us. We talk about it a lot because we've been embarking on very interesting technical considerations. So your project was one of the first: in the last episode, I was here with Frankie a couple of weeks back and we were talking about consumption of AI. We started the business need piece that we did with you on an open source academic model that we were tuning, an ML model that we were tuning in order to get that business need. And in the middle of that project, GPT opened its doors, and we suddenly looked and went, wow, the speed at which we can accelerate this. That wasn't the consumption option that was available to us when we started the project. And we flipped really quickly, and pretty easily actually, to start to test the two models together, taking into consideration, and I know you guys consider it greatly: I have hundreds of thousands of these documents, so what is the cost going to look like for me to own the model versus the cost to hit GPT with the business need? But the results from using model as a service, because the power of GPT was just so vast, meant that we flipped our journey and went that way.
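The "own the model versus hit GPT" trade-off Chris describes is ultimately arithmetic: fixed hosting and engineering cost versus a per-call fee at your expected volume. A back-of-envelope sketch with entirely made-up numbers; plug in your own quotes and volumes.

```python
# Back-of-envelope monthly cost comparison: self-hosted model vs. model-as-a-service.
# Every figure below is a placeholder, not a real price.
def self_hosted_cost(gpu_hours: float, gpu_rate: float, mlops_overhead: float) -> float:
    return gpu_hours * gpu_rate + mlops_overhead

def api_cost(calls: int, tokens_per_call: int, price_per_1k_tokens: float) -> float:
    return calls * (tokens_per_call / 1000) * price_per_1k_tokens

monthly_calls = 300_000            # e.g. hundreds of thousands of documents
hosted = self_hosted_cost(gpu_hours=720, gpu_rate=2.50, mlops_overhead=4_000)
managed = api_cost(calls=monthly_calls, tokens_per_call=1_500, price_per_1k_tokens=0.002)

print(f"self-hosted ~${hosted:,.0f}/month vs model-as-a-service ~${managed:,.0f}/month")
```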
Chris Brown: [00:29:23] And so that's an interesting aspect of this project. Another one is that we still don't get to work on many unsupervised learning projects, and we've been embarking on unsupervised learning in that correlation of the business journey, because there's no labeled data for that, right? It doesn't exist. So our guys are getting to really flex their muscles on the unsupervised learning aspect: how can I do correlation where I'm not presenting any rules to an engine, and can we discover business need and make that work? So as fascinating as it is from the business side, and valuable, it's been incredibly interesting for us in understanding how we flip a project when model as a service becomes available. How do we do that contrast and compare, across all of the things that I talked about in the last episode? How do we get to flex our muscles a little bit in the unsupervised learning world, where we've done some projects, don't get me wrong, but they're still few and far between? So it's been a hell of a good match, I'll be honest with you. And there's a bit of serendipity in that, right? You need a bit of good luck along the way with some of these things. But yeah, it's been super interesting.
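The unsupervised angle Chris describes, starting with no labeled "business need" data, is commonly approached by vectorizing documents and clustering them, then reviewing by hand what each cluster means. A minimal scikit-learn sketch under those assumptions; the cluster count and sample texts are arbitrary.

```python
# Sketch: discover groupings in unlabeled content with K-Means on TF-IDF vectors.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "reduce churn with predictive retention scoring",
    "subscriber cancellation early-warning model",
    "automate inbound ticket triage and routing",
    "support ticket classification with NLP",
]

X = TfidfVectorizer(stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(texts, labels):
    print(label, text)   # inspect clusters manually to name the themes
```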
Chris Vriavas: [00:30:44] It's definitely interesting. And I think, you know, you talk about the dark arts of marketing, and some of what you...
Chris Brown: [00:30:52] Stop saying that. That's just my...
Chris Vriavas: [00:30:53] Some of what you just went over is a dark art to me. You know, obviously I understand these concepts in general, but the technology and the aspects, I don't have that sophistication with it. But I think in general, with a lot of this technology, there's a part of people that are worried: what does this mean, what does it do? It's a worry, it's a concern. There's a flip side to that, which I like to focus on, and which you just encapsulated: this is exciting.
Chris Brown: [00:31:25] It is exciting.
Chris Vriavas: [00:31:26] This is where things can go, and this is how we can improve processes. This is how we can improve marketing. And with what we're doing, as an example, we're not trying to convince you to buy something that you don't need. We're trying to help you buy something that will help you do your business better. And so all of these elements come together. Yes, it's marketing; we're not curing cancer with this, but it's trying to improve a process, trying to improve a buyer's journey for real needs. And exploring all of these things, as basic as some of these concepts may seem, it's a lot of greenfield. It's not really been pursued yet. So I look at it as a lot of interesting potential, and I think we're just scratching the surface on where this goes.
Chris Brown: [00:32:14] And I think what's really interesting, Chris, is that even for our technology folks, and we're a deeply technology-based business, right, we live in the depths of technology, you can't replicate that excitement without there being a known business value. These guys still want to work on something that is going to generate the business value that you've just described. So that's where these journeys and projects really come together. And that importance flows all the way through a project: when everybody in the project knows, whether you're a data scientist or whether you're running the product for the company, that the business value, the goal, is there. That's what generates the enthusiasm, the excitement, that I think we're seeing throughout this project.
Chris Vriavas: [00:33:03] Yeah, 100%. I think one of the things that we try to do a little bit with your team as well is to connect the dots and explain what our clients see, because when we show them these key phrases, when we show them the business need, there are light bulbs that go off. It's really fascinating. And again, for us, some of it at this point, because we've thought about it so much, seems quite commonplace, quite basic, but those connections are important. I like to break this all down to the basics of: what is the point of this technology? It's great for me to be able to say, sure, I work in AI a little bit, but to what end? It's all about to what end, and some of this end is really fascinating. So we're always trying to explain to the team as well: this is what we're hearing in the marketplace, and it really resonates. People are fascinated by this, because some of the work we've engaged in hasn't really been done before. I'm not saying this is the most groundbreaking AI that exists, but it doesn't really exist in our space, and it turns a lot of heads.
Chris Brown: [00:34:15] That's brilliant, and it is fascinating. I've been fascinated. And Chris, it's amazing how time flies. We are out of time, believe it or not; it's gone super quick. Thank you so much for being with us today. I really appreciate you taking the time out of your busy schedule to come and talk to us. It's been fantastic, so thank you, really sincerely, for sharing your time with us.
Chris Vriavas: [00:34:38] Thanks for having me.
Chris Brown: [00:34:38] And the insights, right. It is a fascinating subject, and as you say, it's the combination of the AI and the space that is potentially groundbreaking, and it takes both of those things. So thank you. Thank you again. And that's great stuff. That's it for today. I'll remind everyone that we'll be back in a couple of weeks, in this room or another room that looks similar to this, to immerse ourselves in the next episode of The UnwAIred podcast series. But until then, once again, Chris, thanks for joining us. Thanks.
Podcast
AI Hype to ROI
Explore the practical side of AI in the first episode of The UnwAIred, hosted by Chris Brown, COO of Intelygenz, a VASS company, and Frankie Carrero, Director of Data & AI at VASS.
In an era of AI hype, we dissect real-world applications, revealing its impact on revenue, cost savings, and customer satisfaction. From off-the-shelf solutions to custom models, gain insights into successful AI integration and debunk common myths.
Tune in and discover the authentic potential of AI for business success.
Conversations / AI Hype to ROI
Chris Brown: [00:00:06] Hello everyone, and welcome to this inaugural episode of The UnwAIred podcast series from VASS. My name is Chris Brown. I'm the chief operating officer at Intelygenz, a VASS company that specializes in AI production deployments. Now, I'm going to level with you from the very beginning here that podcasting is brand new to me, but I'm thrilled to have alongside me my co-host Frankie, who is our director of Data and AI here at VASS. Welcome, Frankie. It's great to have you here.
Frankie Carrero: [00:00:39] Thank you. Thank you very much. It's great being here today, and I'll try to help you as much as I can with this podcasting thing.
Chris Brown: [00:00:45] You've got to promise to try and hold my hand through this, Frankie, as we get through this first episode. But we've got a fantastic series lined up, a great set of topics to cover over the next few weeks, all different subjects across the AI range. But I thought, let's have a look at what we've got planned for today and just introduce the episode, if you don't mind, please, Frankie.
Frankie Carrero: [00:01:07] Yeah, sure. We're going to be speaking about AI. We've all heard people speak about AI, but there's a lot of confusion around it: many different views, many different media speaking about things that are complex for businesses, and also for people and for society as well. So we need to try to make sense of it, especially when we bring this AI significance to the business side of the world. And we have to keep up with a technology that is always evolving. We know that there are some challenges that we need to address. So we'll be speaking with a lot of people who know a lot about artificial intelligence, and they're going to help us try to clarify all those things.
Chris Brown: [00:01:50] That's great, and I'm really looking forward to the series. I think we've got some great guests lined up and some great topics to talk about. But I think a good place to start, right, I think we should dive in, and I want to ask you a question. Because if I'm sitting at home, I'm a listener listening to this podcast, I happen to stumble across this podcast in the ether, and I'm figuring out: should I pay attention to AI? Is it for me? Should I care? Who should be caring about AI? Or is it just for the heavy tech guys over in Silicon Valley? Should you be paying attention to the subject of AI? What would you say to that?
Frankie Carrero: [00:02:27] Well, I would say that if you live in society these days, and I think most people do, you need to take care, and you should care about AI, because it's everywhere around you. It's everywhere for you as a person and it's everywhere for you as a company. As I was saying before, this is not only something for cutting-edge tech companies in Silicon Valley; it's for everyone all over the world right now. And we know that some of the best companies are in the US, also in Europe and also in China and other countries. So we can say that this is something global and relevant for all kinds of industries, from health to industry to retail, whatever sector or market you think of. They will be using AI to optimize their operations or, for instance, enhance customer experiences. So there are many different ways they can leverage artificial intelligence. And if we think about how people perceive it: your mobile phone is full of applications that use artificial intelligence to improve the pictures that you take, for instance, or to let you talk with some of those applications. And at some point, I'm sure we'll speak about ChatGPT, which has an application that you can interact with by speaking. So it's something that we all have. And it's not only something for leisure; it's also something that we can leverage as people, for instance, because it helps us avoid fraud. And fraud is not only important for banks; it's also important for us as people. So we can consider that it can help us and protect us in some ways. And I know that not everybody thinks the same; there are different opinions about these matters, but I'm sure that we'll get those opinions in the next episodes.
Chris Brown: [00:04:33] Well, I was wondering how many episodes we'd get through before we said ChatGPT, and we didn't even make it five minutes, right? Which is unsurprising; it's incredible technology. But yeah, you win the record: we got five minutes in and we said the words ChatGPT. So I'm sure that will come up many times over the next weeks as we go through these different topics.
Frankie Carrero: [00:04:56] And it has changed the AI world as we know it today, and as we knew it a year ago. So I think it's definitely something that we need to dive into.
Chris Brown: [00:05:07] Oh, for sure. I'm going to try and get Jonas, who's the head of our AI practice at Intelygenz, on one of these podcasts. He's not a guy of many words, right? But I'm going to try and get him on to talk about that exact subject in a few weeks' time, because from where we started nine years ago to where we are now, you say AI is moving fast; I mean, nine years is like light years in the world of AI. We're talking about things that can move in milliseconds, at speeds of new development that we've just never seen before, where we're developing projects where we just wouldn't take the same approach again, and we're not even six months down the line and the same approach is no longer the most efficient and valid approach. So yeah, it's definitely moving quickly.
Frankie Carrero: [00:05:57] Yeah. At the same time, continuing with the ChatGPT thing, we need to make sure, that's twice now, yeah, twice, I know, but we need to make sure that we all know that it's not the only way to have AI or to create AI, because we've been dealing, like you said, for a long time with many different algorithms and solutions that apply artificial intelligence to, we can say, make the world better, or that's the thing we should aspire to. So we'll also try to have someone speak about what's beyond ChatGPT: what other kinds of applications, what other kinds of methods and even algorithms we can use to embrace artificial intelligence and leverage it as well. Yeah.
Chris Brown: [00:06:42] I think it's a good point that you make. There's so much hype around AI, and a lot of that hype right now is also focused very intensely on GPT, or ChatGPT, or the underlying GPT model. And we live and breathe in this world of AI deployment day in and day out; that's what we do as a business. And I can tell you there is a lot of great work and great projects and great solutions out there that are beyond GPT, beyond large language models, beyond generation. The world seems to have forgotten a little bit about classification and detection and prediction, right? That world is still there. So it is about cutting under the hype and looking for where in your organization, where in your business, you can adopt AI for good: for making those efficiencies, those gains, seizing those growth opportunities within your business. And it's key that we don't just get laser-focused on large language models and generative AI in their own right, as amazing as they are, and it is fantastic, and we do a lot of work in that space. But when you're starting your journey, it's really important to keep that breadth of vision, to really start to think about what it is that your organization needs from a business perspective.
Chris Brown: [00:08:09] It's no different to any other business challenge you look at, AI or non-AI. We shouldn't be starting with the technology. We should be starting with: what is it that we want to achieve? What are the challenges that we're trying to address? And then work out, on the map of AI capabilities, where I should be picking a solution, where I should be targeting the kinds of models and the kinds of approaches that I want to take in order to solve that challenge. That really should come first. And it's a topic I want to talk a little bit about today, because even the way we consume AI has massively changed in such a short space of time, and I'm going to be guilty of it now: accelerated by OpenAI, accelerated by other foundation models that are coming to the market. Because if I think back nine, ten years ago, when we started our first journey in the new dawn of AI, really we were looking at custom-built models, models from scratch, taking models from academia, changing those models, having to host those models, train those models.
Chris Brown: [00:09:23] And the world has changed so dramatically and drastically. That's still a valid approach for a lot of our projects, but it's not our only option now. So if you look at some of the historic reports, and you can pick any of these clever industry studies, you'll see numbers like 85%: 85% of AI projects never make it into production. That means 85% of AI projects never have an impact on ROI, or have minimal impact on ROI at best. And those numbers are tumbling; I should clarify, those numbers are improving. The 85% figure is coming down, right? AI projects are getting more successful in terms of results, and a lot of that is down to the consumption methods of AI. It's a topic I talk about a lot with our clients, and it's a topic that I think is overlooked, but in my opinion it should really be the first topic you consider when you're going on this AI journey.
Frankie Carrero: [00:10:42] Yeah, especially when you want to get the best ROI for your business. You need to use different methods of consumption depending on the situation: the context you have, the money, the investment, the level of maturity that you have in your company. So maybe you can just go on and tell us what those three methods are and explain them a little bit. Yeah.
Chris Brown: [00:11:06] So the ROI point that you make is really important. You've got to be eyes on the prize: this is not a technical exercise. When you're deploying and investing in any technology, including AI, in your business, ordinarily you're looking for an ROI, some return on that investment, in order to create a gain for your organization in some way, shape or form. That's really important, but it is not the only consideration when you're thinking about consumption of AI. So let me address roughly where we see the three layers of consumption today, which is very different from just that custom model, that model-from-scratch example. Again, that's still a valid foundation for any solid project, with some really good benefits to taking that route, but it's heavy, right? You've got to manage that whole AI life cycle. Your data has to be managed really well. It's not just about the model exploration, the model selection, all that sexy AI work from the data scientists in the middle, which is super sexy, we get that. You've also got to get it out of the lab and into production: version control, the whole MLOps piece. In the custom world you're going to be responsible for all of that; you've got to manage that real end-to-end life cycle.
Chris Brown: [00:12:30] And that's why we saw numbers like 85% of projects not making it: because the tools weren't ready, the capabilities weren't ready. That is maturing, even at this layer it's maturing really well, and those numbers are getting more and more successful. But above that layer we've got two other layers of consumption. The next one is model as a service, and at the top we've got off-the-shelf AI capability. Let me jump to the top first, and then I'll work my way back down to model as a service. So if you think about off-the-shelf capabilities in AI, this is where the hard work of the AI, training the model, the model selection, getting the data into the right shape, has all been done, right? It's been done by these incredible companies, and the map of off-the-shelf AI capabilities and solutions is now just growing exponentially, like crazy. With all of those high-powered AI solutions, you don't need any AI capability of your own; you're starting to consume them like software as a service, because that hard work has been done. So now in this world you're getting things like voiceover with ElevenLabs.io, to pick one from somewhere: ElevenLabs.io creating really authentic voiceover on videos. Maybe they'll apply it to my crazy accent.
Chris Brown: [00:13:57] Right, and they'll run an ElevenLabs.io pass on top of this podcast, maybe. But that's an off-the-shelf capability where you're using the incredible power of AI in the audio space to generate a very authentic human voice, something that was more or less impossible three years ago. And now it's an off-the-shelf capability: you subscribe, it's very cheap, and it demands little capability from a consumer point of view, because the hard work has all been done. And as I say, there's a whole map of these: SAP delivering AI capabilities within their portfolio of products, Salesforce, all of these platform companies creating built-in AI capabilities that are going to really optimize your business. But I go back to those considerations. One of them, when you think about why am I doing this, why am I embarking on my journey: if part of that is that I want to create differentiation, well, off the shelf maybe isn't the place you want to go. If you want to create optimization and efficiency, it's a great place to go. But differentiation is unlikely, because these tools are available to everyone to consume at a low price; they're really aimed at optimization and efficiency. So this is what I mean about that.
Chris Brown: [00:15:16] Those considerations, right. They're low cost, they're fast, you don't need high skills, but you're not going to get much differentiation; you're not going to create any IP for your organization. They work well for efficiency, and if they fit exactly what you want, but they're rigid, right? Your data has to match their use case, the hypothesis you're trying to solve for. ElevenLabs does incredible voiceover, but it's not going to do product prediction or churn prediction or classification of inbound tickets; that's not what it's built to do. It's rigid; it's built to do one thing. So low flexibility, but a really great place to get started on your AI journey if you're starting to think, hey, how do I just start to play with these capabilities that are in the market? So we've got that. And you can imagine that the 85% figure of unsuccessful projects tumbles here, because now you're consuming something that already works; you just have to apply it to your solution. So that's a great place for that kind of consumption. The second layer, the one we didn't talk about yet, sits between the custom layer at the bottom and off the shelf at the top: the model as a service layer. This is where I'm going to say it: ChatGPT. Right. Although really, this is why it's not ChatGPT, it's GPT, right?
Frankie Carrero: [00:16:29] LLMs.
Chris Brown: [00:16:29] LLMs in general, right. Or they may be LLMs, or they could be any foundational model that has been trained with hundreds of millions of dollars and hundreds of millions of compute hours of training time, models that are just so powerful at solving very general problems. These are models like GPT, like DALL-E, where you can start to lighten the load. You can still get to a custom model, but you're lightening the load on that wide AI life cycle in terms of how much you have to cope with as a business, and you start to get extreme power and extreme customization, because you can take that foundation model and tune it with different techniques, whether it's prompt engineering, whether it's RAG; you can enhance it with your own data sets privately. And you can really start to get to a very powerful custom model that is impactful to your business and very unique. Now you are starting to drive differentiation within your business. But there are other considerations for model as a service as well, right, that are in that stack of the conversation.
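As an illustration of the prompt-engineering and RAG-style tuning Chris mentions: you keep your own documents, retrieve the relevant ones, and pass them to the hosted model inside the prompt. The sketch below assumes the OpenAI Python client (openai 1.x) and uses a placeholder model name and invented documents; swap in whatever provider, model, and retrieval layer you actually use.

```python
# Sketch: minimal retrieval-augmented call to a hosted model. The retrieval step
# here is a naive keyword match; real systems use embeddings and a vector store.
# Assumes the OpenAI Python client and an API key in the environment.
from openai import OpenAI

private_docs = {
    "pricing-faq": "Our enterprise tier includes SSO and a 99.9% uptime SLA.",
    "onboarding": "New accounts are provisioned within one business day.",
}

def retrieve(question: str) -> str:
    """Naive retrieval: return docs sharing any word with the question."""
    words = set(question.lower().split())
    hits = [text for text in private_docs.values()
            if words & set(text.lower().split())]
    return "\n".join(hits) or "No internal context found."

def answer(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick a model that fits your constraints
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("Does the enterprise tier include an uptime SLA?"))
```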
Chris Brown: [00:17:52] So things like: can I legally use the model? Because you're no longer hosting it; you no longer have the choice of where that model is hosted. And if you have a challenge or a solution that requires data that can't leave your firewall, for example, well, model as a service today is not applicable. If you're not considering that when you start, you could end up with a great technical solution, because you've taken some synthesized data, or some data you're allowed to take outside your firewall, and you've trained this model, and then you get to production and go, oh, I actually can't use it for inference, I can't use it in real time, because this data can't leave my firewall. That's a question we need to be asking way up front, to keep yourself out of that 85% number, right? You need to be asking that. Or you might have a data sovereignty issue: your data may be allowed to leave the firewall, you can go to the cloud, but these models are hosted in certain countries, and if the data you're dealing with can't leave your country, you've got the same problem. So there are the legal terms and conditions, and then you've got to understand the commercial model.
Chris Brown: [00:18:59] These LLMs are incredibly powerful, and they tend to be very reasonably priced today. But if you're hitting them with massive amounts of API calls within that commercial boundary, you need to be very careful about your commercial considerations: you might outprice yourself, in which case you need to think about a different solution, a different consumption method, or be really clever within your project and architecture so that you're minimizing the tokens that hit the LLM, or whatever foundation model you're using. So again, these considerations are super important. And if you got past the first question, which I agree was a leading question, of should I care about AI, of course I think you should care about AI, then the next step is really not to jump over this whole question of how you should consume: how should I select my challenge and how should I consume my AI? That is the place I would start.
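The commercial point is easy to quantify before you build: count tokens in a representative prompt and multiply by your expected call volume and your provider's per-token price. A sketch using tiktoken; the prices below are placeholders, not a current rate card.

```python
# Sketch: estimate monthly model-as-a-service spend from token counts.
# Prices are placeholders; check your provider's current pricing.
import tiktoken

PRICE_PER_1K_INPUT = 0.0005   # placeholder USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # placeholder USD per 1,000 output tokens

enc = tiktoken.get_encoding("cl100k_base")

def monthly_cost(prompt: str, expected_output_tokens: int, calls_per_month: int) -> float:
    input_tokens = len(enc.encode(prompt))
    per_call = (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT
        + (expected_output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    )
    return per_call * calls_per_month

sample_prompt = "Summarize the business need expressed in the following case study: ..."
print(f"~${monthly_cost(sample_prompt, expected_output_tokens=200, calls_per_month=500_000):,.2f}/month")
```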
Frankie Carrero: [00:20:01] There are so many interesting things in there.
Chris Brown: [00:20:02] That was a monologue, no?
Frankie Carrero: [00:20:03] No, don't worry, don't worry.
Chris Brown: [00:20:04] I told you I was new to this, Frankie.
Frankie Carrero: [00:20:07] Okay. I have a couple of things that I think we can discuss as well. One of them, behind all of this that we've been talking about, is the concept of AI democratization. And this is not just for small companies, but also for big companies, because you were speaking about foundational models, and these foundational models have been built, first, with tons of data, tons of information coming from different places, and there are only a few companies in the world who have the power and the money to invest in and develop these models. Then these models can be used, enhanced and tuned by other companies to make them something personalized for them, for their particular challenges or the projects that they want to solve. So this is something that has really made these advanced AI capabilities something that every company can use. What do you think about this?
Chris Brown: [00:21:11] I think it's absolutely spot on. The amount of investment that has gone into foundational models can only be achieved by a handful of companies in the world; there's just no question about that. A handful of companies, and maybe a couple of governments, can afford the level of investment that has gone into creating these foundation models. And having those presented back out to the world in a commercial model, I'm okay with that; there's a lot of investment gone into it. I think the right word is democratize: it democratizes access to incredibly powerful AI, for companies to start to access that power, that capability, and tune it to their very specific need. That wasn't possible less than a year ago, right? What were we talking about? ChatGPT. Time flies for me, I can never remember, but I don't think we were talking about ChatGPT a year ago, and it's not much more than that. It's incredible, the speed at which it has come on, and it will go beyond that. There will be orchestration of different large language models, orchestration of different foundation models with different capabilities, vision models, a whole plethora of capabilities coming out there. But that movement to expose that investment to the market and allow organizations to piggyback on it, that's really what's happening: piggybacking on that training investment in order to just do the final yard, or the final metre as we're in Spain, of training. It has opened all avenues for all people. And I don't even think it's just businesses. I think you're finding in your daily life, in your daily consumer world, that you're accessing tons more AI, because they've opened up that investment and they've moved it...
Frankie Carrero: [00:23:15] For...
Chris Brown: [00:23:15] Profit. But yeah, I'm okay with that. There's a lot of money and a lot of time and investment that went into it. So yeah, it's a great one, and I'm not sure how we're doing on time, but...
Frankie Carrero: [00:23:28] I have another topic, if you don't mind.
Chris Brown: [00:23:29] I would like to ask you a question. No, if we've got even more time, you can choose your topic. But I'd like to ask you a question, because you're involved, and I've done a lot of the talking, frankly, so I think I should open the floor to you. You've implemented a lot of AI projects over the years, a lot, right? Is there any hint, any tip, you would give to someone embarking on their AI journey, aside from the consumption conversation we just had, on how they could maximize the ROI of their project? Sorry to put you on the spot.
Frankie Carrero: [00:24:08] No, no, no worries. It's fine for me to speak about that. And it's true, I've been working on AI for a long time, more than 25 years, so... don't age yourself, Frankie.
Chris Brown: [00:24:19] We're looking great under these lights. Don't age yourself, okay?
Frankie Carrero: [00:24:22] Yeah, maybe I look younger than I am, I'm not sure. But in the first years it was like everything was manual, artisanal. You needed to do everything in small steps, and it was really difficult to put a project into production. But now it's so easy that it's also easy to misuse the money, the investment. So, if you want in some way to maximize the ROI and try to ensure that the AI project you're going to start will be successful, maybe not in terms of the results, because with AI that is something that is not always easy to know beforehand, but at least in the sense of doing everything by the book, the first thing I would say is that you have to start small. You need to try to define and execute a small project and see how it goes. If it goes well, if the results are okay, if the business people understand what you're doing and the impact it's going to have on the company, then it's time to take the next steps, which will be to improve the models, to get more training data, to test it with more people, sometimes more customers. So I would say start small and scale gradually. That would be it, yeah.
Chris Brown: [00:25:45] And I think considering that multidimensional capability of scale, right? Because you can scale a project through volume: you're pushing more through the solution that you've got. You can scale your project through accuracy: you're putting more investment into improving the accuracy of the results coming out the other side, so you're catching more of those results because you're being more accurate. And you can scale by adding use cases. So there's all of this multidimensional scaling, because, like you, we talk to a lot of clients as well, and there's never a shortage of lists, right? It starts with the whole "I want to do AI." "Well, what do you want to do with AI?" "I don't know what I want to do with AI." "Well, maybe we should start there." And so we help and guide them through that conversation, and you come out at the end, and it's never a list of one. There's always a laundry list of: here are all the things, now that I've understood what the art of the possible is and understood my business. We spend some time looking at the business challenges, because, and I go back to the earlier conversation, we're not asking them to look at the technology. We're asking them to look inside their business, and we ask them questions like: if I could predict anything in my business that would improve it, what would it be? Or if I could classify this, or if I knew this was going to happen, or if I knew how to deal with this situation that has already happened, or if I could create this.
Chris Brown: [00:27:17] And so we're putting it into more of the words of the business. I mean, you can still hear that we're in the generative, classification, prediction world, but we're trying to present those questions in a way where people take the technology out of their heads. They try to forget about what they've heard in the media, and they really focus on their business. And then we can start to look at it. And that laundry list I talked about before, that's a way to scale. So you might want to start small on many different things, or you might want to start small and grow that one thing. That's another piece of thinking about how I am going to scale my approach to AI as well. So yeah, your point is really valid.
Frankie Carrero: [00:27:59] And I think we should say that we can never forget that an AI project is part of a product. The product can be used by employees of the company in different ways, and it can be used by customers also in different ways, but it's always part of a product. So these models, these artificial intelligence projects, need to align with the product. This is something that doesn't always happen. As a product owner, you need to know what artificial intelligence can help you with, and whether you really need it. Sometimes you don't need to create a really complex project for this, and there are other solutions that will be easier to manage in some ways. And just one more thing about this: even when you have the product in production and it's working, most of the time you're going to see that the models you generate, the artificial intelligence, start to behave differently, or fall behind the behavior of the consumers, the employees or the customers, because they change over time. Everything changes over time, and the model needs to adapt to those changes. So you need to keep retraining the models, and even try to be ahead of what's going to change in the real world.
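Frankie's point about models falling behind changing behaviour is usually operationalized as drift monitoring: compare a recent window of inputs or model scores against the baseline seen at training time, and trigger retraining when they diverge. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are illustrative, not a recommendation.

```python
# Sketch: flag drift by comparing recent model scores against the training baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(loc=0.40, scale=0.10, size=5_000)   # baseline
recent_scores = rng.normal(loc=0.55, scale=0.12, size=1_000)     # shifted behaviour

statistic, p_value = ks_2samp(training_scores, recent_scores)
if p_value < 0.05:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}) -> schedule retraining")
else:
    print("No significant drift detected")
```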
Chris Brown: [00:29:24] Yeah, you've got to. There can be a lag, and that lag needs to be managed. And again, how you deploy your solution, it's complex, right? I always say all the solutions that we go after for a client are poetically simple from a business perspective: I want to detect an error in this image, so I take an image of a product and I want to know, is it broken or not? It's not hard to understand. The complexity is in the piece you just talked about. Even putting a solution together in the lab, if you have the right skills and the right capabilities, can be pretty straightforward. But managing that loop in production that you talked about, minimizing your lag, understanding how you're going to cope with that within the operations of your business, those are critically important non-functional considerations that you need to make sure are part of the whole approach. Good point, good point, and well brought up. I'm going to stop us there, because I know I haven't been keeping track of the time, but I think we're probably out of time.
Chris Brown: [00:30:33] I'm going to look over here and find out. Yeah, I think we're out of time. I think it was a good starter for ten for this series, Frankie. We covered that AI likely matters to a lot of people, in a lot of businesses, in a lot of areas, and also that whole consumption consideration, the ROI consideration. And I'm really looking forward to the future episodes, the coming episodes, because I think we've set a foundation, hopefully, to dive deeper into some of the things we talked about today, with specific topics in either specific industries or specific technologies, looking at how times are changing. And this is only the beginning, right? It's only the beginning of the series, and I hope everyone tunes back in in a couple of weeks, because I'm going to have a conversation with Chris Vriavas, a good friend of mine and a client of ours at contention, posing the question to Chris of: what has AI done for marketing recently? So I'm really looking forward to that, a more industry-focused episode.
Frankie Carrero: [00:31:34] Sounds really, really good. And as you said, we've just scratched the surface of AI, so we need to dive deep into that in the next episodes. I'm sure we're going to have the most interesting people here.
Chris Brown: [00:31:48] That's awesome. Well, thank you for holding my hand through my first ever podcast.
Frankie Carrero: [00:31:52] It was easy. It was easier than you think.
Chris Brown: [00:31:53] Well, I can talk, you know, I can talk; it's been said. So again, thank you for joining me. I look forward to the future episodes, and we'll leave it there for today.
Frankie Carrero: [00:32:03] Thank you very much, Chris. Cheers. Thanks.