Adrian Webb, Executive Chairman of OutcomePath, dives into how AI is reshaping the insurance industry. From fraud detection to operational efficiency, Adrian discusses the balance between leveraging AI for more intelligent decision-making and the emerging risks in a rapidly evolving tech landscape.
In this episode, host Olivier Lafontaine speaks with Adrian Webb, Executive Chairman of OutcomePath, about the intersection of artificial intelligence (AI) and the insurance industry. Adrian breaks down how AI transforms insurance operations, from improving fraud detection to automating complex processes. He discusses the practicalities of scaling AI responsibly, weighing its opportunities and risks.
Adrian also explores the growing issue of deception in the digital age, sharing how linguistic analysis and AI can help companies detect fraud more efficiently. With insights into the future of the trust economy, he explains why the biggest challenge insurers face may not just be technological; it’s understanding the human element in the rapidly changing world of AI.
AI as a Double-Edged Sword: Adrian discusses how AI can drive efficiency in the insurance industry but warns of the risks it introduces, especially in fraud detection and customer relationships.
Detecting Deception with AI: Learn how linguistic analysis, combined with AI, can help identify fraudulent claims and improve accuracy in underwriting, all while maintaining human oversight.
The Future of the Trust Economy: As AI evolves, Adrian highlights the growing importance of trust in a digital world and how companies can adapt to this shift to protect their business and customer relationships.
Adrian Webb
Executive Chairman of OutcomePath
Adrian Webb is co-founder and Executive Chairman of OutcomePath, a company focused on leveraging artificial intelligence (AI) to automate decision-making processes in the insurance industry. With over 25 years of experience, Adrian has worked across both the technology and insurance sectors, leading disruptive companies such as Direct Line and GoCompare, and playing a pivotal role in taking a digital insurance company public in 2013.
Before founding OutcomePath, Adrian worked with various organizations, including financial service providers, where he gained expertise in improving operational efficiency and reducing risks through AI. He is dedicated to solving issues such as insurance fraud by leveraging AI and human expertise to address complex challenges.
In addition to his work with OutcomePath, Adrian is currently pursuing his PhD in Philosophy of Technology at Exeter University, where he teaches the next generation of leaders about the intersection of tech and human behaviour.
Adrian Webb:
AI can do everything, but most companies need it to do something, and in most cases, something really specific. And the beauty of what we tend to do at OutcomePath is to identify those specific problems and then build in the mix of orchestration, automation, machine learning, and, at the very last stage, use thin slices of AI to make those problems better for our clients.
Olivier Lafontaine:
I'm Olivier Lafontaine, and this is Life Accelerated, the podcast for life insurance leaders focused on driving meaningful change through technology, process and partnership. In this episode, I talk with Adrian Webb, Executive Chairman at OutcomePath, a long-time financial services leader who has spent his career reshaping how insurance works. From launching one of the first internet-based insurers to his role at GoCompare, Adrian has been driving industry disruption for decades now. At OutcomePath, he's helping insurers rethink operations through orchestration, automation, and targeted AI. We dig into how AI can enhance decision making, what linguistic analysis reveals about fraud and customer behavior, and why trust and authenticity remain the industry's biggest hurdles. Adrian also shares his take on near-term efficiency gains, the risks of generative AI, and why the future may require a total rethink of competition in a dead internet world. Let's get into the episode. Good afternoon, Adrian Webb. I am happy to have you on the show. What I'd like to do to start with is talk a little bit about one of the things you do that is not related to your work, which is, apparently, you have streams on Spotify for helping people sleep. So I'm intrigued by that, and why don't you tell us a little bit more about it?
Adrian Webb:
Sure. Well, normally I'm a jazz guitarist; I've been a jazz guitarist in a band for over 20 years, and literally just two days ago I was playing jazz to an audience just south of London. But I decided that putting jazz on Spotify was not going to sell anything. So what I decided to do was to create a piece of music that would put people to sleep. Sleep problems are a modern epidemic, but actually somebody in the 1800s worked out a way to help people sleep better using sound. And that involves presenting two different sine waves through headphones, one to each ear. The difference between those sine waves can become what's called a 4 Hz theta pulse, which is the same pulse your brain naturally settles into as you're going to sleep. You'll notice that when things out there in the world are at certain frequencies, the body tends to try and resonate with those frequencies. It tries to entrain, because otherwise there's what they call a moiré pattern in there. And I created a piece of music that basically allows the brain to entrain itself into the brainwave that it would normally adopt in the process of falling asleep. And it seems to work, because so far 1.3 million people have had a listen to it, and I've earned enough to buy a small sandwich.
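The binaural-beat technique Adrian describes, two slightly detuned sine waves whose frequency difference lands in the theta band, can be sketched in a few lines. This is an illustrative sketch only; the carrier frequency, duration, and sample rate are arbitrary choices, not details from the episode.

```python
import math

def binaural_beat(carrier_hz=200.0, beat_hz=4.0, seconds=2, rate=8000):
    """Generate stereo samples, one sine wave per ear.

    The ears hear carrier_hz and carrier_hz + beat_hz; the 4 Hz
    difference is the theta-band pulse the brain can entrain to.
    All parameter values here are illustrative.
    """
    n = int(seconds * rate)
    left = [math.sin(2 * math.pi * carrier_hz * i / rate) for i in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * i / rate) for i in range(n)]
    return list(zip(left, right))  # (left, right) sample pairs

samples = binaural_beat()
```

The effect depends on headphones: each ear must receive only its own tone, so the 4 Hz "beat" exists only as the brain's reconciliation of the two signals.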
Olivier Lafontaine:
Oh, that's amazing. Yeah, I know with Spotify, unless you're a Taylor Swift, the odds of making a living out of it are not fantastic, but nevertheless fascinating, and I'm actually going to try it in the next couple of days to see if it improves my sleep. We have a lot of difficult projects these days.
Adrian Webb:
Well just promise me you won't operate heavy machinery while you're listening to it.
Olivier Lafontaine:
Okay, well, I'll keep that in mind. On a more business note, can you tell us a little bit about yourself, your background, and what led you from building insurance companies to developing AI solutions? So tell us a little bit about your background.
Adrian Webb:
So I'm very lucky that I've had a lifetime in financial services, and most of that lifetime in the insurance world. But all of the companies that I was with were sort of mold breakers. Direct Line was the first direct insurance company pretty much in the world; it was the first company in the UK to use an automated call center, and the first company to use computers to underwrite basic personal lines insurance. So I had six years at that company, then two years with Richard Branson building a very innovative bank using the Australian model of offsetting savings against borrowings, and then 13 years building an internet insurance company from literally nothing to a stock market flotation in 2013. Very lucky to do that. And from then I moved, as you would call it in Britain, sort of poacher turned gamekeeper, or in this case gamekeeper turned poacher, to an aggregator company.
I became a board director of a company called GoCompare, which sort of disintermediates direct insurance by presenting prices transparently to consumers, very disruptive. So I've always been lucky to have been at disruptive companies that have changed things as they've gone along and shaped the world of insurance. As with everybody's life story, you spot problems during your own life, and you hope to solve them at some point, by changing the world or by changing the way you behave yourself. And that's what I've been doing for the last three or four years: trying to take the pain points that I suffered in my world, the insurance world, and find some workable, practical solutions to those problems.
Olivier Lafontaine:
So what are some of those pain points as a matter of fact that you've noticed in your career? What would be key ones that you can think of right off the top of your head?
Adrian Webb:
Well, interestingly, Olivier, many of the pain points that I experienced during my career got solved during the course of that career as technology changed. So very often the remediation of the pain is technology coming along to do it. When we started esure, it was very much using the internet to overcome the pain of people not wanting to phone up about their insurance. They really didn't want to do it; it was like phoning the dentist knowing you're going to have to go and have a painful filling. And so the internet gave people a way to back away from speaking to somebody on the phone to entering all their details online. So in that case, technology solved an initial pain point for the consumer, and it also increased efficiency for insurers. The pain points we're seeing now, I think, are very, very different.
And we're going to say the acronym AI many times during this call, but what AI is doing is developing a whole set of different pains for the insurance industry. The biggest one is the ease with which fraud can be committed, certainly in personal lines insurance, travel insurance, home insurance, car insurance, enabled by generative AI. Other pain points are: how do you maintain good customer relationship management when the inboxes of those people you used to be able to email and get responses from are now so full of AI-generated junk that they've stopped responding to everything, and they still don't want to talk to you on the phone? So you've started to get relationship issues that are being caused by what the experts call dead internet theory, by the fact that there's so much stuff out there but we don't know whether any of it has been generated by a human or just by a prompted AI. And that is causing people to almost revert to what you would call a trust economy, of actually meeting people and knowing that they're talking to a genuine live human being, not an avatar or a deepfake or a website that's been created by somebody in their back bedroom using a multi-agent AI. It's a new set of problems.
Olivier Lafontaine:
Yeah, that's fascinating. Typically we think of AI, or sometimes we hear it described, as a technology looking for a problem to solve. But the idea that AI itself causes problems, through cybersecurity challenges and attackers using generative AI to enhance their capabilities, that's a good point. That really creates a new set of issues for insurance companies, and for companies in general, of course. If we talk a little bit about your current company, OutcomePath, can you give us a sense of what you do and what problems you solve with the technology?
Adrian Webb:
Well, interestingly, people often refer to us as an AI company. Yes, we do use AI, but mostly we try to bring it in only in very thin slices. And the reason for that is that anybody watching who knows anything about AI will know that the bigger the amount of information you give AI to process, the worse the outcomes tend to be: there's more range for hallucination and everything else. Giving AI very specific, very well-defined, thin-slice problems lets it increase the cadence, the speed with which intelligent decisioning can be made, but it's only accurate if you narrow it down. So a lot of what we do is actually orchestration and automation of processes that previously had to come out of a system and have a real person look over a spreadsheet or a piece of paper. A lot of that decisioning is now possible with AI if it's constructed in the right way. But AI can't do it all. AI can do everything, but most companies need it to do something, and in most cases something really specific. And the beauty of what we tend to do at OutcomePath is to identify those specific problems and then build in the mix of orchestration, automation, machine learning, and, usually at the very last stage, thin slices of AI to make those problems better for our clients.
Olivier Lafontaine:
And that's amazing. So more specifically, you've been doing things, I think, in the property and casualty insurance space, helping the broker or the agent walk through the risk analysis, ask questions, and identify the risks, rather than having them fill out very sophisticated forms. I find that very interesting. Could you talk about that a little bit?
Adrian Webb:
Yeah, sure. And again, a theme we'll return to, I'm sure, is that you've got to look at this from the point of view of human beings moving through the world, trying to accomplish things. In quite a lot of property and casualty instances, there are complex physical assets, shipping containers, airports, football stadiums, farms, large factories, where you can't just enter all of the information into a structured data form. It needs somebody to go and visit, walk around, and be a human being in that context to be able to understand what some of the risks really are. And what we then realized is that a lot of people put into that problem space end up trying to remember or write down everything they're seeing, return to the office, and process that information into a risk submission for an underwriter. And the feedback I got from some very, very senior underwriters, people who have been doing it for 30, 40 years, is that when it comes to complex risks, specialty risks in particular, and some property and casualty, the information the underwriter gets is never enough to give the best premium he could give if he had the right information.
And often it's the first broker to put the quote, the risk submission, in that gets the price, but it might not be a very accurate risk submission. So all parties end up losing somewhere. So what we did was say: what if we knew what a perfect risk submission looked like for this particular underwriter? And then allow a broker to walk around a factory, walk around a farm, take photographs, record audio snippets, take handwritten notes, take a picture of a tractor, whatever it is, in any order. So asynchronous data gathering, through any channel that's convenient for them in that moment, and have the orchestration process bring that information together and say: where does this fit into a best-practice risk submission for this particular underwriter? You often end up with much more risk information than the broker could ever have gathered. And I'll give you a concrete example.
I was demonstrating to a chief underwriting officer yesterday that if I take a photograph of a field of sheep and put it through a sophisticated sort of analysis, it can identify things like: three of those sheep don't have ear tags on, and that elevates the risk of cross-contamination if they're not properly tagged as livestock. And in the top right-hand corner of the field, it looks like there's a bit of broken panel that could let them escape onto a road and cause a car accident. So compared with the top-down processing that humans do to interpret their world, AIs don't live in the world, so they have to process everything, and sometimes they get richer information out as a result.
Olivier Lafontaine:
Yeah, and that's amazing. This being a podcast about life insurance, I've thought about a few scenarios since our preparation conversation, and I can see how this would be really interesting in group insurance, for example. A similar scenario, where the agent would go around the company, ask questions, take pictures, and gather information in a way that is a bit more unstructured and natural. Currently they fill out certain questionnaires and spreadsheets, and that's transmitted over to actuaries, who then price for the group. But I can see how we might get a much more detailed, a bit more organic, view of the risk, and perhaps these types of technologies can help in giving more accurate pricing, better pricing based on true risk, as opposed to today, where a lot of it has to be standardized. So I can see a lot of interest there. Don't you think?
Adrian Webb:
I agree with you. I'm going to challenge you slightly there, if you don't mind. One of the things I think is really interesting about the life market, and we've spoken about this before, is that if you imagine you could gather the information to have perfect risk assessment, it would effectively end the life market. The life market relies on the fact that you don't know who is going to die within the term of their insurance. If you had perfect knowledge, genetic knowledge, to be able to predict that accurately, the market would cease to exist, because the market is about pooling risk on the basis of imperfect knowledge. As soon as you have perfect knowledge, insurance disappears. If you know who is going to crash their car, who wants to insure that person? Effectively, you have to charge them a premium that is the cost of the accidents they're going to have.
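The arithmetic behind that point is simple. With hypothetical numbers (a pool of 1,000 lives, 10 expected deaths within the term, a 100,000 sum assured), the pooled premium is modest, but under perfect knowledge the premium collapses to each person's certain outcome:

```python
payout = 100_000        # sum assured per policy (hypothetical)
pool_size = 1_000       # policyholders sharing the risk
expected_deaths = 10    # deaths expected within the term

# Imperfect knowledge: everyone shares the expected loss.
pooled_premium = expected_deaths * payout / pool_size

# Perfect knowledge: the premium becomes each person's certain outcome,
# so there is nothing left to insure.
premium_if_known_to_die = payout   # the payout itself
premium_if_known_to_live = 0       # no one would buy cover
```

With pooling, each person pays 1,000; with perfect foresight, the "premium" is either the full payout or nothing, which is no longer insurance.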
So I think for life insurance, instead of looking at more accurate risk assessment, although I take your point on group, if it was an individual case, I would be using AI to help your customers live longer, because everybody who goes past the last day of the term increases the profit, and the customer's not going to be angry. He's not going to say, I could have died and got more from this. Everybody wants to live longer. So there is actually an alignment of interests in life insurance with using AI to understand customers better and to educate them towards better, longer, happier lives. I think that is where the real value might lie.
Olivier Lafontaine:
That is very valid, and that's what Discovery is doing in South Africa. And I think they've expanded into the US; they've sold their technology in the US and elsewhere around the world, with the health devices that collect information and give you points when you go to the gym. So they try to align their interests with your interests, because I think nobody is ever disappointed to have bought an insurance policy and then not have to use it. And nevertheless, I think you're right: adding AI to the mix here to align the interests of the insurer with the health and longevity of the client, that's a great point. I do think that, even for underwriting and classifying risk, well, I agree that perfect risk assessment, predicting perfectly who is going to die and who is going to survive, would be an interesting prospect.
I don't know if we're going to get there, but it is still interesting to see if we can pool the risk better, if we can create groups that are more refined than today's, which are pretty broad. In individual life underwriting, if we split into 10 or 12 different groups, that's already a lot of work and effort for underwriters. If AI can support that work by identifying different characteristics, those are all fascinating things, and that's why it's interesting to hear entrepreneurs like yourself building technology that will advance this in the future.
Adrian Webb:
The other thing I would add is that for life insurers, over the next three or four years, a lot of the progress in the insurance industry is going to be about efficiency, because inefficiency in insurance companies tends to be systems with places where information has to come out of the system, be seen by a human, and then go back into the system. We would call that the operational compensation layer. Ideally it would be straight-through processing, but it has to come out and then go back in again. Efficiency over the next three or four years is going to be about getting rid of operational compensation and finding ways to build intelligent decisioning that is pretty accurate into the core systems of the company. Because then it's development cost up front, and thereafter the incremental cost is only electricity, whereas with things coming out of systems, having to be checked, and then manually put back in, you carry both the incremental cost and the lag. There'll be an efficiency showdown, but I suspect this will be arbitraged out to zero within three to five years.
The efficiency gain is actually a very short-term one, but if you are competing hard today and you don't have those efficiencies in the next two or three years, the industry could reshape pretty quickly if one player is much more efficient than another. Which is why I really admire all of the work that Equisoft is doing to bring that in, to increase the base efficiency of companies to an optimum level. In the end, it might be that they can all compete on an even plane, and then people will be looking for new differentiators to get incremental value from their business. But at the moment it's tooling. AI tooling is what everybody is doing. And so a lot of our work is really on automation, orchestration, AI tooling, getting the efficiency up, but it's not the long-term solution. It's just a short-term wave.
Olivier Lafontaine:
To keep up, basically, with the rest of the group. And there's still exploration: a lot of what we hear, from private equity firms for instance, is about demonstrating that you're effectively getting some economies, efficiencies, or doing more with your existing team by using AI. There are all the promises, and it's very exciting to see, but how do you translate that into numbers that can be presented at board meetings and things like that? That's what remains to be seen, I think, in a lot of cases, but it goes so fast that we have to move forward.
Adrian Webb:
The long-term vision of a CEO in the life insurance industry shouldn't be thinking that AI is going to solve all our problems in five years. It should be saying AI will help us with some of our problems in the next two or three years, but then we're actually going to have to rethink what competition looks like in a dead internet world, where people have stopped looking at their emails, where people are quite distrustful of stuff that comes to them through a screen. They don't know what is real and what to trust, those sorts of trust issues. You notice how nobody's talking about blockchain anymore, whereas if you wind back three years, everything was crypto and blockchain; now everything is just AI. But the trust economy was something that blockchain was meant to solve. And in fact, what AI has done is pull the rug from underneath the trust economy, because you do not know who you are speaking to. You don't even know whether I am real or you are real. We could be deepfakes. We're not, folks, but we could be, and it's really hard for people. Global CEOs have been fooled by these things.
Olivier Lafontaine:
That brings me to another topic you've talked about that I find fascinating, which I've never heard from other people: linguistic analysis as a way to interpret, through tone and word selection, people's state of mind, or whether they're lying, things like that. Can you talk about that a little bit?
Adrian Webb:
Sure. Again, this is something which is sort of AI-facilitated, but ultimately it really comes from people taking time to understand how humans work, how humans work in the world. And the one thing we know from linguistics is that I can sit here and lie to you using words that sound perfectly rational to our audience. I can say: I'm actually the King of England, and I have slightly more money than Elon Musk, and I have a unicorn in the field. You can hear me saying those words in roughly the same way as the other words I've said, but semantically, what those words mean is not true, and you can tell. So lying itself is actually quite easy; we all do it all the time, when we say things like "what a lovely dress, grandma" to protect people's feelings.
But ultimately, the language with which we frame our lives comes from a community that we have grown up in, moved through, and learned how to communicate appropriately with. Now, the problem with lying is a cognitive one, in that when I am telling you the truth about what I did this morning, I just read from memory, and I have very specific what they call spatiotemporal anchoring: I was walking up the stairs when I saw the sock that I needed to put into the washing, I stroked the dog, and then we did this. I've got that sequence because I'm reading it from memory. Somebody who is lying has to make up a continually growing snowball of falsehoods to support the central deception, and in doing so, they start to exhibit linguistic changes. So you and I have spoken before about lexical density.
So lexical density is the ratio of information-bearing words to the functional words that wrap around them. If I say to you, "this morning the crash occurred at exactly 11:10; I know because I was looking at my watch as I was driving past," that has a very high lexical density: I'm giving you lots of information. If you say, "well, it sort of came out of nowhere, and I'm not quite sure," very few of those words bear information. So when lexical density changes, very often people are drawing more on fabrication than on memory, and that's something linguistics can pick up. It's not really an AI thing. And the reason I say that is because AI, machine learning, and everything else requires huge pattern sets to learn from.
And in fact, cases of deception in things like fraud are very, very tiny slices of the whole corpus of an insurance company's claims lines. So what you do is look at how humans behave when they're lying: they have to make stuff up, and is making stuff up easy? No, it's really, really hard. How do we exhibit that in our language? We start to hedge more; we use more words like "perhaps" and "maybe" and all that sort of thing. People tend to use those sorts of phrases when they're fabricating, but not when they're recalling from memory. And we've identified a huge pattern set of things like lexical density, semantic bleaching, hedging patterns, disfluencies, all of these things that tend to come out in deception. There's a corpus of knowledge about this that's been written over the course of 80, 90 years, so it's now fairly refined. But you can use AI to do some of the final decisioning, by using, say, speech-to-text to enable you to tag the text and do it a bit better.
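A toy version of the lexical-density measure Adrian describes can be computed by dividing information-bearing words by total words. The stoplist below is a crude illustrative stand-in for the part-of-speech tagging a real forensic-linguistics tool would use, and the two example utterances echo the ones from the conversation.

```python
# Crude illustrative stoplist; real systems use part-of-speech tagging
# to separate content words from function words.
FUNCTION_WORDS = {
    "a", "an", "the", "it", "of", "to", "in", "at", "on", "and", "or",
    "but", "well", "sort", "kind", "i", "you", "is", "was", "not",
    "out", "that", "this",
}

def lexical_density(utterance: str) -> float:
    """Ratio of information-bearing words to all words in an utterance."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    words = [w for w in words if w]
    content = [w for w in words if w not in FUNCTION_WORDS]
    return len(content) / max(len(words), 1)

recalled = lexical_density("The crash occurred at exactly 11:10")     # dense
fabricated = lexical_density("Well, it sort of came out of nowhere")  # sparse
```

On these two utterances the recalled account scores far higher than the evasive one, matching the pattern described above; a real system would of course score whole statements against population baselines, not single sentences.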
Olivier Lafontaine:
Would that be useful in the context of, let's say, a telephone interview for underwriting, or fraud detection during claims, those types of things? Is that the sort of application that would use this technology?
Adrian Webb:
Yes. The thing all insurers are trying to get towards is real-time indicators of deception likelihood. Now, what has happened so far is that AI has swept the board in a lot of personal lines by doing pattern matching between claims and policy inception over very large data sets. When it comes to having an indicator of whether this person seems to be exhibiting the linguistic signatures of deception more than of truth-telling, if you could indicate that live, what you do is not turn down the claim. You just say: here's the cohort of claims that we're going to pay a little bit more human attention to. We're going to look deeper at this set, because those are the ones that have those indicators. It may well be that it's just a nervous person. So you never use these tools to void a claim. What you use them for is to narrow where you put your focus, so that you become much more efficient in protecting your loss ratio.
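That triage idea (score, then route, never auto-deny) can be sketched with hypothetical hedging and disfluency markers. A production system would score with a model trained on a validated corpus, not a hand-made word list; this only illustrates the routing logic Adrian describes.

```python
# Hypothetical marker lists, purely for illustration.
HEDGES = {"perhaps", "maybe", "possibly", "somehow", "somewhat"}
DISFLUENCIES = {"um", "uh", "er"}

def deception_indicators(statement: str) -> float:
    # Fraction of words that are hedges or disfluencies; a higher score
    # means "look closer", never "deny the claim".
    words = [w.strip(".,!?").lower() for w in statement.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    flagged = sum(w in HEDGES or w in DISFLUENCIES for w in words)
    return flagged / len(words)

def triage(claims, threshold=0.1):
    """Split claims into (needs human attention, straight through)."""
    review = [c for c in claims if deception_indicators(c) > threshold]
    fast_track = [c for c in claims if deception_indicators(c) <= threshold]
    return review, fast_track
```

The score only decides which queue a claim lands in; the decision to pay or refuse stays with a human, exactly as the passage insists.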
Olivier Lafontaine:
And you spend more time on those, rather than sprinkling your effort across a larger pool of claims that might in the end not result in anything being flagged. So I think that's really interesting in terms of improving efficiency, as we talked about before.
Adrian Webb:
It is triage. It's exactly like going to a hospital, and somebody says: is it a broken leg? Is it an internal pain? Is it a bleed? Which of those main things is it, and whereabouts in the hospital should we direct them? So real-time fraud detection is going to be like the triage nurse, who is not a doctor but is able to divide those presenting into categories: you send this one to the doctor, and the doctor says, no, this one is absolutely fine, no problem at all, that's all good. But you want to make efficient use of your people, where not everybody coming in sees every doctor to decide what's wrong with them. You're triaging them early. And I think that is where claims efficiency lies, particularly as gen AI fraud starts to become a real impact on our business. Across all insurers, everybody is afraid of gen AI, because insurance companies move much slower than the fraudsters who are using it.
Olivier Lafontaine:
I agree. And beyond insurance companies, there's a growing pool of scholars and experts who express concerns over the growth of AI and its spread into our day-to-day lives. And I think to some degree you share some of that, or at least you're intrigued by it. I think you're chair of philosophy of technology at Exeter University, if I'm not mistaken?
Adrian Webb:
I'm far from being chair, so I'm not a professor, but I'm doing my PhD there, and I do teach some of these subjects to young students. So it's an area that definitely interests me academically in my studies.
Olivier Lafontaine:
Your point about the amount of things, the lies, that can be generated with generative AI, and insurance companies being worried about this, I think that is a common theme among scholars and thinkers. You being a student of philosophy of technology, what are your thoughts on the dangers, I suppose, of gen AI growing without too much control around it?
Adrian Webb:
Okay, the big issue with gen AI is this: think about the ways that large insurance companies have to operate. They operate in regulated environments, within boundaries that are looked at by consumer regulation, by reserving regulation, the PRA, all those things. They're under huge scrutiny, which means that changes that are fundamental to their business go through a lot of scrutiny before they go live. So the gestation period for changes that are systemic in an insurance company is years. It's years. The problem with gen AI is that, for 20 pounds a month, somebody can sit in a back bedroom and generate pretty good passports, pretty good fake photographs, pretty good everything. That creates an asymmetry of ability between the insurance company that's trying to detect and the fraudster, who, if it doesn't work, just tries again tomorrow.
If new technology comes in, they can use it today. Nobody in an insurance company has ever said, there's new technology, let's have it in by this afternoon. But that is exactly what happens in the fraud world. So you have this ability of fraudsters to knit together any microservices, any AIs, any multi-agent thing. And all they have to do is test it and see if the weakest insurance company pays out the claim, and if it doesn't, iterate, keep iterating until they close that gap, and then just use some new technologies to do it. An insurance company cannot operate in that way. So we've got this arms race where one side is developing bombs that take a year to create to try and solve the problem, and the other side is going with pistols that take them a morning to create. It's a major headache for everybody in the insurance world, particularly property and casualty, where evidence tends to come from photographs, documents, invoices. A lot of keynote speakers are making a really good living out of generating completely fraudulent businesses, and by the end of the keynote speech, they're already trading. It's so easy to do. In insurance, it's going to be a while before that washes through, because there's a discordance between the abilities of the two sides.
Olivier Lafontaine:
And you're going to see frauds around disability, frauds around accidents, all kinds of things, and it's going to get more and more difficult to identify the fakes, whether they're fake police reports or anything else. So insurance companies will have to adapt to that. And you're right, the speed at which fraudsters are operating and the technology is evolving is not at all aligned with the speed at which insurance companies are able to adopt and defend against that. So it's very difficult, and a lot of people have flagged this. A lot of industries are facing this, but insurance in particular, because its very nature is about managing risk, cannot make bold moves very rapidly. So it's an interesting challenge for us in providing tools to help with that.
Adrian Webb:
If we could regulate fraudsters, the problem would be solved really quickly. If you said each gang of fraudsters has to have a risk committee and an audit committee and has to report to the regulator every so often, then there wouldn't be a problem. The problem is that as an insurance company, you have to do all of that, and it slows down your ability to react to technologies that are moving faster than social norms, legislation, regulation, and the internal capacity of big systems to cope with them.
Olivier Lafontaine:
We get a lot of insurance executives on this podcast, so it's actually interesting that for this one we have somebody who comes more from the technology and vendor side, although you do have some background in insurance companies as well. It's an interesting mix. But a lot of the executives at insurance companies are a little bit stuck trying to even understand what is happening and how to navigate it. The CIOs are bombarded with questions. Obviously, all their employees want to use ChatGPT or Claude or Gemini. They want to automate their workloads. They want to use this to be more efficient. And there is a growing body of people expecting those efficiencies to happen, which indirectly puts pressure on everyone to go into the Wild West and explore those tools. But what does that mean exactly? We see these incidents happening from time to time; it's all moving really fast, and it's difficult for the CIOs to navigate. So I agree with you, it's going to be an interesting next couple of years to see what comes out of this Wild West, to defend and honor the duty of the insurance companies.
Adrian Webb:
The duty of insurance companies. One of my colleagues at Exa University, a doctor called Brunette Tyler, has written quite extensively on the philosophy of technology, and one of the things he points to is a piece of writing from the early 1990s by an American called Vernor Vinge, who said that one of the problems with increasing technological capability is that if we try to regulate very fast-moving technologies, the unregulated versions will always outperform the regulated versions. That was fine in the pre-internet world, because basically every country had border controls to stop or regulate those technologies. The internet cannot be controlled in that way. Certainly China can seal off certain elements of it, but VPNs are everywhere; everybody knows how to use them, you see everybody on holiday using them. So anybody who says, I will subscribe to a regulated technology, is subscribing to a slower technology that moves less fast than the unregulated version. And one of the things my colleague has said is that the problem with that is there is always a pull in capitalism towards the unregulated technology, because it will be faster and it will generate incremental profits over those using the regulated technology. That's a real problem for ethics, a problem for governments, and even if one government brings in regulation, it will hobble its own country compared to another country operating in an unregulated way, as we've seen with North Korea.
Olivier Lafontaine:
Yeah, a hundred percent agree. Listen, it's been fascinating; I really enjoyed the conversation, Adrian. Perhaps we'll have a chance to explore these topics further. It would be nice if in six months you could join us again for a follow-up on what has happened. If we'd had this conversation six months ago, I think we'd find a lot has evolved since then; these days we're talking about agentic AI, and agents are a big thing. Who knows what's going to happen in six months. But yeah, really enjoyed the conversation. Thanks for your time.
Adrian Webb:
I really enjoyed it, and given the speed of change, maybe we should put one in for next week, but we'll see.
Olivier Lafontaine:
Yeah, agreed.
Adrian Webb:
Thank you Olivier.
Olivier Lafontaine:
Adrian's career has always been about solving problems in insurance, whether by challenging the status quo or applying AI in practical ways. What struck me most was how he balanced short-term efficiency with the long-term challenges of trust, fraud, and rising competition in a digital-first world. From rethinking risk assessment to using linguistic analysis for fraud detection, Adrian shows that progress in insurance demands more than just tech. It takes an understanding of people too. Thanks for listening.
Don't miss out on powerful insights from some of the top executives in life insurance. Sign up and get notified whenever a new episode comes out.