Amanda Turcotte and Darwin Larrison talk about how their organizations are approaching AI adoption in life insurance. From CRM integrations to call center enhancements, they share how technology is improving both operational efficiency and customer service. Discover how leaders are managing risk and finding opportunities for growth.
In this episode, host Olivier Lafontaine speaks with Amanda Turcotte, SVP and Chief Actuary at Amalgamated Life Insurance Company, and Darwin Larrison, VP and Chief Information Security Officer at Modern Woodmen of America, about how their teams are navigating the changing landscape of artificial intelligence in life insurance.
Amanda shares how her company is applying tools like Amazon Q and Intelligent Document Processing to streamline customer support and data handling. Darwin explains how governance frameworks, vendor partnerships, and licensing decisions are shaping how AI tools like Copilot are being deployed securely and responsibly.
Throughout the session, Amanda and Darwin bring their unique perspectives from actuarial and security leadership to highlight what AI can realistically deliver today, and how insurers can prepare for what’s ahead.
Adopting AI in insurance requires more than tools. It demands structure, governance, and cultural buy-in.
Licensing strategies and vendor partnerships can quietly shape how innovation spreads inside an organization.
AI can help small carriers scale smarter by turning everyday data into operational advantage.
Darwin Larrison
Vice President & Chief Information Security Officer at Modern Woodmen of America
Amanda Turcotte is Senior Vice President and Chief Actuary at Amalgamated Life Insurance Company. She leads strategic actuarial functions, product development, financial modeling, and reinsurance strategies. In this role, she is charged with guiding the firm’s risk management vision and enabling innovation while maintaining financial strength in a modern insurance landscape.
With over twenty years in the insurance industry, Amanda has held senior positions, such as Senior Business Development Actuary at Guy Carpenter, where she provided reinsurance solutions across life and annuity products in the U.S. and Canada. She also founded Turcotte Consulting, advising InsurTech startups on product innovation and technology integration, a clear reflection of her comfort at the intersection of actuarial science and digital strategy.
Amanda holds the Fellow of the Society of Actuaries (FSA) designation and membership in the American Academy of Actuaries (MAAA). She completed a double major in Economics and French Literature at the University of Alabama, bringing interdisciplinary thinking to her technical work.
Known for combining analytical rigor with entrepreneurial insight, Amanda advances accessible, values-driven insurance solutions for working individuals and their families.
Darwin Larrison serves as Vice President and Chief Information Security Officer at Modern Woodmen of America, a member-owned fraternal financial services organization founded in 1883, offering life insurance, retirement planning, and community-focused member benefits. In his leadership role, Darwin oversees cybersecurity strategies, governance frameworks, vendor risk, and data protection practices for a complex financial services organization.
Darwin is widely recognized for establishing AI governance practices and managing enterprise licensing decisions, especially with tools like Microsoft Copilot, and crafting risk frameworks that balance innovation with regulatory compliance and cultural alignment.
Over his career, he has built robust IT security teams and implemented best practices in data governance, information protection, and third-party oversight.
A thoughtful and forward-looking leader, he combines experience with a mind focused on the future, preparing his organization for the opportunities and challenges of AI-driven change.
Olivier Lafontaine:
I am Olivier Lafontaine, and this is Life Accelerated, the podcast for life insurance leaders focused on driving meaningful change through technology, process, and partnership.
This special episode features a live recording from the Equisoft Summer Summit. In this session, I'm joined by two industry leaders who bring a practical and behind-the-scenes perspective on how artificial intelligence and innovation are shaping our industry. Our first guest is Darwin Larrison, the VP and Chief Information Security Officer at Modern Woodmen of America, where he leads with intentionality, especially in his approach to AI governance and vendor adoption.
Our second guest is Amanda Turcotte, SVP and Chief Actuary at Amalgamated Life Insurance Company. She shares her actuarial expertise and InsurTech experience, including how they use tools like Amazon Q to boost efficiency and enhance the customer experience. All in all, we cover licensing challenges, internal education strategies, sentiment analysis, and CRM enhancements.
Finally, we learn how both companies balance innovation and compliance. So let's get right into it.
So, for this session, I'm very happy because I don't have to come up with the intelligent thoughts. It's going to be my colleagues here from the insurance companies. And I'm sure these were not your first-ever AI sessions. There's always all sorts of thoughts. But what's going to be interesting in the next 45 minutes to an hour is we're going to hear from people who work in actual insurance companies and how they think about stuff, which is always interesting. So let's start with introductions. Again, Olivier Lafontaine, and then we have Darwin, or let's start with Amanda, maybe. Go ahead. Maybe tell us who you are and who you work for.
Amanda Turcotte:
Sure. Amanda Turcotte. I'm Chief Actuary at Amalgamated Life Insurance Company. That's a small little insurance company in the state of New York, but we issue business nationwide. I've been at Amalgamated for almost nine months now. Prior to that, I was actually in InsurTech for about eight years. I left AXA Equitable, where I was head of product pricing and underwriting for group benefits, and co-founded two companies, two startups, raised a couple of rounds of funding, had a consulting practice that catered to InsurTechs, and then decided to come back to corporate, which has been really, really lovely over the past nine months.
Olivier Lafontaine:
Awesome.
Darwin Larrison:
Hello, everybody. Darwin Larrison, Chief Information Security Officer and Vice President, Modern Woodmen of America. We're based out of Rock Island, Illinois. I've explained the Quad Cities probably six times, so I'm not going to say Quad Cities anymore, but they're part of the Quad Cities.
So just real quickly: I've been at Modern Woodmen probably eight years now. Prior to that, I spent two years at UnityPoint Health, and, I don't know, we had like five or six hospitals then. And prior to that, I was with FBL Financial, and I don't even think they're a publicly traded company anymore, but it's Farm Bureau; most people refer to the Farm Bureaus out West. I was with them probably 15 years; that was my first IT job.
Olivier Lafontaine:
Awesome. And we have an interesting mix: we have the actuarial side and then the security side, so I feel we're going to have some interesting takes from both ends of the spectrum. So let's start in the thick of things. Can you describe your company's overall approach to AI progress? Have you started certain projects? Have you finished certain projects? Maybe we start with Darwin.
Darwin Larrison:
Okay. Okay, I thought about this question, and I would say cautious and deliberate, and it's because we're an insurance company. And one of the benefits of being cautious and deliberate, and also working in information security/compliance, even though it's another department, it's hand-in-hand or hand-in-glove, whatever you want to say, is that being cautious and deliberate keeps you from stubbing your toes and making mistakes, et cetera. So insurance companies have traditionally been slower in adopting technologies; they let them bake, and by letting them bake, you tend to have things go a little better, allegedly.
But in this case, and I thought this was super interesting because it was so validated in the previous excellent presentations that I just saw, I felt like two or three years ago, and I'm not sure of the exact timeline, the OpenAI/Microsoft partnership got dropped, and it felt like it was dropped in my lap. What happened was it opened to the outside world, and we had a local company that actually had some employees put in private information, or information they shouldn't have put in there, and they were terminated. That is a company that has a lot of secret information, but it really did happen in a public AI model. So that pressure was on us. And the interesting thing is that there weren't a lot of security mechanisms or controls at the time. They're in place now, but they weren't then.
But anyway, where we got lucky was that it really blew up with OpenAI and Microsoft's partnership, and it went into what we always call our walled garden, our subscription. So we got to adopt it rather quickly and, allegedly, safely. Even though it was very uncomfortable, we were able to use the Copilots, and that was within Word, within Outlook, et cetera, if you had the right license, which is something that I learned: not everybody gets a license, because licenses are expensive. But we could talk about that later.
So, I'll stop here just to say: through all of that, we had to send out an email that said, "Don't put our private or confidential information into public AI." That was the first control, et cetera. And then I've had to learn, learn, learn. I kind of run the governance program at our company, and that's what I've been doing.
Our company now has probably 30-plus AI-infused, they'll say augmented, but I like infused, applications. The vendors have been bringing in AI, and we've been basically approving them if they can answer certain questions through our governance process. We're also doing some initiatives that we'll talk more about later, I think. But yeah. So is it buy, rent, build, or blend? Isn't that what they say? We're definitely a blended shop.
Olivier Lafontaine:
Amanda?
Amanda Turcotte:
Yeah. So I think like a lot of insurance companies, large and small, and we're definitely on the smaller end of the spectrum, if you think of going to InsurTech Connect out in Las Vegas in 2022, or even now to Insurtech Insights here in New York in 2025, there are hundreds of vendors that [inaudible 00:06:16] have widgets, and it's impossible for a small company to vet them all.
And similar to what you were saying, we rely on our large vendor platforms, whether it's AWS, whether it's Salesforce, Microsoft. Just like you were saying earlier, how you're embedding AI tools in the product that you're providing insurance carriers, we really appreciate that vetting process. So whereas there was probably an opportunity, maybe not as safe, to start just learning any and every AI tool back in 2021, 2022, while we might be a little bit later on the pathway, we're really leveraging the research that a lot of our vendors have put into AI and into what the appropriate tools are for our company, and starting to bring those to our colleagues.
Olivier Lafontaine:
And if we talk about specific technologies or tools, you have a couple of examples, maybe we start with Amanda since you had-
Amanda Turcotte:
Yeah, it's a hot potato now.
Olivier Lafontaine:
Yeah, I think we're going to go with Q2.
Amanda Turcotte:
Yeah, exactly.
Olivier Lafontaine:
So, specific tools and examples of things that you've done and how that worked out?
Amanda Turcotte:
So we are really starting our journey more in the customer-facing part of the organization. In customer service we're leveraging Amazon Q in our call centers. We don't have AI speaking to customers directly, but it allows our call center reps to be so much more efficient, because rather than scrolling through hundreds of documents and correspondence that could have passed back and forth between our company and the client over the years of a contract, these AI tools can filter through those quickly and allow the representative to give answers to clients faster and really improve that customer experience.
And we're also leveraging the Amazon Intelligent Document Processing, which allows us to just ingest the mountains of unstructured data that insurance carriers receive daily in a really efficient way and categorize those much better.
Olivier Lafontaine:
Is this something that you've completed? Are you in the process of rolling it out? Is it starting?
Amanda Turcotte:
Not completely finished.
Olivier Lafontaine:
Okay. So would it be more like a POC, as far as you can tell, or would you call it a POC? Or is this a proper project that is going for-
Amanda Turcotte:
I'd say beyond POC, so deployed, but still in early [inaudible 00:08:47].
Olivier Lafontaine:
Okay, good. Darwin, any specific tools that you-
Darwin Larrison:
Yep. Non-vendor... well, vendors are helping, but we're using CRM, and we had a big project. So the one big thing that we are working on and building is related to our CRM. It's also an online system, and it's related to that third party I mentioned earlier. I'm not trying to go off on a tangent, but that's what it is, a big platform.
So we had a big project related to data, getting the data ready and pulling it from the various data stores around the company, because you know it's all over the place in various killer apps from the 1970s, '80s, '90s, 2000s, et cetera. Pulled it all in: the Customer 360 Project, pretty good name. And so that's now the source of truth on customers. Who are the number one users of it? Not the customers, but our salespeople, our sales force throughout the country.
And so there are tools being built that are not earth-shaking, but boy, are they just obvious. Hey, this person's a great potential prospect for cross-selling or something, or just all kinds of stuff related to data on our customers or about our customers to help our sales reps. And then there are a bunch of efficiency things with that, too, such as taking notes: when they make a call, it listens to the call, not to record it, but to make notes, say, if they sent the customer a change of address form or something, and it makes those notes for the reps so they don't have to.
But that's that. And there are a couple of other things they're doing within there related to the CRM. I'll try to think of some of the other ones, but it's all CRM-related. It all sounds great. The rest of it has been, just like I said, kind of the Copilots and stuff. And you'd think that I would want to use Security Copilot, and I'm not using it yet because I don't want to pay for it yet. I've got a new young person just out of college, and I want him to be able to learn how to do tickets and investigations, not just take him from college and put him into a system. I'm going to let him use his scripting and things like that for a while to solidify his knowledge before I let AI do his job, and then let the person move up in their position or go to a different type of position, security prompt engineer or whatever's going to happen.
But that's basically it: vendors coming in, augmenting systems. It is us buying some one-offs, but that's been very few, it seems like. Or renting; it's more renting than buying, renting or leasing or whatever. And then CRM.
Olivier Lafontaine:
Out of curiosity, is anybody else battling with this issue of licenses and determining whether everyone in the company gets a license? It's expensive to give everyone in the company one. Amanda, yes?
Amanda Turcotte:
Yeah, like I stated before, we have deployed Copilot, but it is very expensive, and there's the Microsoft Copilot in a recurring [inaudible 00:11:45]. Yeah, and you have to be, I guess, sensible to gain enough traction and start to see the value before it can be broadly rolled out [inaudible 00:11:53]. We're selectively giving it to certain pockets so that they can be the early [inaudible 00:11:58] users who can then go on to be champions in [inaudible 00:12:02] spaces so that it becomes more compelling [inaudible 00:12:05].
Darwin Larrison:
Can I add something to that? So, our strategy for licensing has been the squeaky wheel strategy, is what I'm calling it. That means when somebody calls up and says, "Can I use Grok? Can I use this? Can I use that?" I say, "Well, I use Copilot." Well, they don't have a full license. So that's what was happening with me, and I'm like, "How come I don't know this, that they don't have a full license like I do?" And that's a whole other story.
And long story short, the squeaky wheel gets a license, because they've got a good use case and they're already doing it. Then we give them Copilot and they say, "You know what? That's not bad." Whether it's been code generation or whatever the other use case was. Who was the actuary? Are you an actuary?
Amanda Turcotte:
Yeah.
Darwin Larrison:
That was an actuary, right? I thought you were.
Olivier Lafontaine:
You needed the pricing instead of the actuary. That was perfect.
Darwin Larrison:
Yeah. So we gave him the license, but that is an issue, and it's a darn shame. One thing I want to say: I've been working with LIMRA and LOMA on the security for AI thing, which is intimidating to do, but I get a ton of research, and guess what I use? AI. And it was awesome, because these are very complex subjects, and it puts them at what I call a middle level of understanding. You know how it's notorious for bullets and everything, but it gives you everything you need to go to the next level. And so I'm a huge fan. Our reps also love the email feature, where it'll regenerate their email. I now look like a PhD in English via email and Word documents. When I write performance appraisals and everything, I look like a PhD in writing.
Olivier Lafontaine:
Did you notice an increase in length of emails over the last year or so? All of a sudden, these gigantic-
Darwin Larrison:
Mine have gotten smaller. Are you kidding me? Oh yeah. It totally eliminates the fluff. It gets you right to the point. It cuts down the amount of text big time.
Olivier Lafontaine:
But I feel like a lot of companies are battling a little bit with measuring adoption or value out of Copilot. We use Claude, we actually use both, and that's a challenge in itself. Some people use ChatGPT, and so on and so forth. But you give the license to people, and it costs, I forget, typically $40 a month or something? You have to take it for the whole year. They all have pretty much the same model. Then next thing you know, you have 400 users at $40 a month. It's a pretty expensive proposition. Have you been able to measure value other than just saying, "Yeah, it's really useful. It writes my email better and stuff"?
Darwin Larrison:
No. It's anecdotal. It is totally anecdotal. Now, I shouldn't say that. I would get in trouble with the LIMRA and LOMA brethren, because they did create an ROI mechanism and stuff. Honestly, in my job with the governance and compliance groups that I'm running, I'm not doing the ROI thing, and I'm purposely trying to stay out of that. That's somebody else's battle to fight, because I don't run the AI strategy. But I will tell you that I'm very interested, because I want the ROI. And the ROI is anecdotal; it's me standing up here going, "Man, it's cool how it rewrites my emails, how it does my Word, how it helps me research massive amounts of information and helps me learn stuff fast." It's just going to change everything. But yep, no, nothing yet.
Olivier Lafontaine:
And out of curiosity, Amanda, have you guys deployed any AI tool widespread, to everyone? Or is it on the squeaky-wheel-gets-the-license type model?
Amanda Turcotte:
Not like a Copilot or anything, that is still on a squeaky wheel basis. I'm a squeaky wheel. It's great.
Olivier Lafontaine:
Did you get through in the end? Did you get your license?
Amanda Turcotte:
Yeah, I have my Copilot license, which is not as cool as a pilot's license. I think some of the return is almost immeasurable. I was talking in the prior session about how AI has the opportunity to help us continue to do the work we do every day, even as we have a wave of retirees and real insurance experts leaving our industry. And I think that's going to be a big challenge for our industry. Well, I think one of the real opportunities is to start recording and transcribing every single Teams call we have, because there is so much information transmitted in these Teams calls that is never captured if you're just taking notes.
However, then when you're using Copilot to ask, "Hey, how does our policy admin system manage a termination that happened three months ago that we just got notice of today?" Well, guess what? You, or somebody else, had a discussion on that three months ago, right? So what's the return on that? Well, now we have the opportunity to have consistent company processes and not have regulatory issues. We have faster training of new employees, and we have the opportunity to set up agents that can learn from those transcriptions. But that is very challenging to measure and say, what's my return? It really is more of an investment, I think, and we'll see the return in the future as we build this trove of data, I guess.
Olivier Lafontaine:
Or probably measure that you're losing ground if you don't do it.
Amanda Turcotte:
Right. Yes, exactly. That's-
Olivier Lafontaine:
However, I think what's interesting, and what I think we'll see: some vendors are at least giving you tools to measure who's using it. They can't tell you the value, but if you've given licenses to people, it'd be nice to know that they're using them.
Darwin Larrison:
That's a good correction. I did say nothing. We do have the ability, and I can get into it later if we talk about security and compliance and overwatch and things like that. We do see what people are using. And a not funny, but pretty good story related to that: an eye-opening story of how much people were using unauthorized AI after we said no.
Olivier Lafontaine:
What?
Darwin Larrison:
Yeah, they were. Anyway, we do get that. We could see the prompts, we could see the usage.
Olivier Lafontaine:
So at the very least you can assume that if they're using it, they must be getting some value out of it. So that's at least [inaudible 00:18:13].
Darwin Larrison:
Unless it was some goofy, there were some of that too, the goofy AIs that had nothing to do with business.
Olivier Lafontaine:
All right, so let's move on to the next question. We're probably not going to do all the questions, but I think we want this to be interactive, so please interject if you want to. We've talked about the results a little bit. I want to ask you how you feel about the pace of change of those tools. We talked about Microsoft Copilot, it's the obvious one. Everybody uses the Microsoft suite, it's there, the little pink whatever-is-the-name-of-that icon. Pretty obvious, people get some value out of it.
Now, there are hundreds of these vendors that you're going to see at Insurtech Insights or in Las Vegas, or if you go to any conference, or if you google it. How do you even keep track of that? And if you ask any of those AI tools to tell you which are the trending technologies, every month you get a different list of vendors. So what are some strategies that you can use to keep your head above water in terms of evaluating the potential for these things and choosing to license them or not, short of the squeaky wheel? And then educating. We talked about this and the preparation. How do you educate people about the possibilities out there?
Darwin Larrison:
I will try to be succinct here. We did a very good job, I believe, of creating a governance program, and that's how we kind of do this. And I'll explain it as quickly as I can. Got a group together, got permission to do it, started an AIGG, an AI governance group, and we just started kind of skunk-working a process. Members of that: legal, compliance, vendor management, IT, information security. And through that, especially through vendor management, we're able to inject our requirements to our people. When they request AI use cases, they get a form, and they fill it out. We had a young person from IT go off to learn and get certified in AI governance, so we could call him an AI expert. So they fill out the initial forms, and then he does a more in-depth review; he has a little graph, and he says where he thinks it sits on a risk level. And the risk level has to do with customer impact or employee impact, negative impacts, and that has to do with fairness or whatever you want to say.
So anyway, we built a really nice program. Anything that's rated high has a control plan, and those are all online forms that get filled out and that we assist with. A very good policy, written mostly by the lawyers in compliance; I wrote about a third of it, then I backed away and let them have fun. But it's very good, I think. It will survive an audit, but we have to be strict about not using AI that's not authorized. So that tool I talked about that monitors AI usage also blocks it. Basically we have one big giant block list by default, and then we whitelist things that we approve, and then we track them.
As for changes and things like that, the main tracker is our enterprise architect group, and they're kind of in charge of tracking versions and everything; they have special software for that. So that's another one of those where I say, "You've got it, thank you very much," because I feel like I'm herding cats enough. The ROI and things like that, that's the business people, the people asking for it; they have to justify it to their boss, who goes up the chain and whatnot. I just do security, compliance, and governance, and make sure AI is keeping our data safe, and that's all I have to really worry about.
Olivier Lafontaine:
So that's on the control side, which obviously in your position is going to be the main thing. Yet if I were to ask, in your view, who in the company comes up with the use cases? Brian talked about this in the previous presentation; use cases are key. Is it your CFO? Is it your chief actuary? Is it your people in the field? Who comes up with those requests or use cases? I'm sure you have some people with more interest than others.
Darwin Larrison:
I think the vendors do a lot of that. They come and they say, "We got this new thing, and this is going to do this for you." And they say that to the managers who kind of own the application or the application relationship. So a lot of vendors do that; they sell that AI-type product to us. But as far as-
Olivier Lafontaine:
Microsoft for example?
Darwin Larrison:
What was that? Yeah, Microsoft, but all those other ancillary ones too. When I say 30 things, or 30-plus, maybe 40 now, of the augmented applications, those are all different types of apps. Anyway, it mainly comes, I believe, from vendors contacting us: "Hey, we got a hot new feature." And that's literally what they call them in the AI world, building features out of their models. And then our own IT/architecture people kind of try to help and talk and interface with the field, the salespeople. We can't just build AI bots and things like that if we don't know whether they're really what the reps want. Our people are called reps instead of agents. If we didn't ask, it'd be like malpractice. We have to ask them, "What would be most beneficial to you?" And that's how it happens.
Olivier Lafontaine:
They obviously have a secret plan to sell you an additional tool as well, but that's part of how it works, right?
Darwin Larrison:
Yes, 100%.
Olivier Lafontaine:
Amanda, anything on that? So that's more from the IT perspective. I feel you might have a business perspective on this.
Amanda Turcotte:
Yeah, well, I was actually going to share an anecdote, not from our company but from the company I worked for for a hot minute prior. I was a retrenched worker with Guy Carpenter for a little bit before I found my home at Amalgamated. And I was really impressed with how Marsh McLennan was rolling out AI education in their company. They had essentially three tiers of education available to every single employee in the company. The first is about an hour-long learning on AI. You watch a little video online, you learn about AI, what tools are available to you, mostly embedded within Microsoft, but a lot of governance and what not to do, like putting sensitive data online, that type of thing. You get a little badge that you could put at the bottom of your [inaudible 00:24:28]. But then-
Olivier Lafontaine:
Sorry to interrupt, was it more like on the security side or also what you can do?
Amanda Turcotte:
What you can't do, as well as security. A bit of a balanced discussion for the hour-long video. Then employees could self-select to go to the level two learning, which gave you a little bit more information, about five hours with interactive learning and more tests and things like that to give you more insight. So those are for the people who are actually interested. Again, employees are self-selecting. So it's not like management is saying, "Here's going to be my AI [inaudible 00:25:05]," or "for my department," or "This is going to be my AI leader." Which, like we said, are often the squeaky wheels, but the squeaky wheels are not always the best individuals to lead and come up with the creative solutions that can really drive impact in organizations.
And then there was a level three, which was like, hey, I would like to get an undergraduate degree in AI. Not actually, but it was a pretty intense course where you're going into more of the technical IT bits. I didn't do that, full disclosure, but it really gave a lot of technical background, and again, it was available to anyone. You didn't have to be a technologist in order to self-select into learning more about AI and how you could deploy it in the company.
And I think what that gave is opportunities, and this is where the world is going, for every role to be a technology role, every role to be an AI role, right? And to incorporate those skills more broadly throughout the company. Another aspect of that is when you have AI that's embedded in a system: they're giving you an underwriting platform empowered with AI, they're giving you a sales platform empowered with AI. But really, what AI is, it's like a widget, it's a building block, it's a Lego brick. We have a lot of Lego bricks within our technology tools. There are text generation widgets, there are visual display widgets, and that's how technology is built, brick by brick. And I think that's not necessarily how everybody's brain works, like, how do I solve this puzzle by putting different little Lego bricks together?
Olivier Lafontaine:
That's how programmers do it.
Amanda Turcotte:
That's how programmers do it, right? And honestly, I feel like video games are going that way now too. Think about Minecraft, which my kids are playing, right? It's not like, hey, here's a linear story; follow this linear story and do these linear things. It's like, hey, you have this whole world and you can create something. What do you want to do? You're going to come up with challenges in this world and use the widgets in your toolkit to attack those challenges in your own way.
So I think the more that we can give those AI self-selected users access to the widgets to solve the problems that they have in the business world, the more we're going to find those creative solutions that don't just do the things that our technology currently does faster, right? Because right now technology is really like, you have technology that does some part of the process and then we have humans that are doing a large part of the process, but that's where you have the opportunity to say, "Hey, some of this stuff that humans are doing every day right now, we can use a widget and replace it."
Olivier Lafontaine:
Yeah, it comes from the business, somebody in the business. But I love the idea of the self-selected training program at different levels. That's amazing.
Amanda Turcotte:
I really like that.
Olivier Lafontaine:
You may not know this, but if you have any visibility in terms of how successful or how popular it was, were there a lot of people that [inaudible 00:28:13]?
Amanda Turcotte:
I'm sure. I'm sure that's what MFC folks said, "Sure, taking insight's smart." I don't know, yeah.
Olivier Lafontaine:
Really interesting to find out. That's one of the things also where, if you make it mandatory, we found out with a couple of companies, if you make it mandatory for everyone, maybe everyone will do the first level, but a lot of people will just not really catch onto it, or maybe their brains aren't wired that way, it's not for them, they don't really see the point. But if you self-select for the next level, then you eventually identify the people who can do it.
Amanda Turcotte:
Not everybody does Wordle every week, or the Times.
Olivier Lafontaine:
So if we talk a bit more about challenges, and obviously you're going to want to be careful about your own challenges, but is there something you can talk about in terms of the more difficult aspects in year one?
Amanda Turcotte:
Well, I'll talk about one in particular that I think is really interesting. Like I mentioned earlier, I think every call should be transcribed and we should keep that data in our organization. That's our IP, our organization's. But there is a lot of fear around recording and transcribing our calls. We don't like it when we're on the phone with a customer service rep and a little recording says at first, "This call may be transcribed for training purposes." I don't like hearing that, and nobody likes hearing that at the beginning of a Teams call either. But especially when you have management who doesn't like hearing it at the beginning of a Teams call, a lot of those recordings get shut off and then all that data is lost.
I analogize this to the first time I got my parents a cellphone. And I'm not saying this is an age thing, but it's like when you get somebody a cellphone who's never used a cellphone before and doesn't see the point, right? There's lots of benefit to taking your cellphone with you when you leave the house, but I will not be able to get in touch with my mom for like eight hours, and then she's like, "Oh, I didn't charge my cellphone. I left it at home."
I'm like, well, you have a tool. The tool has a purpose. If you get lost, if you get stuck on the side of the road, this is an important tool to have. But it's not part of your habit. You're not comfortable with it: I have to now remember this thing, put it in my purse, and back in the early 2000s it was a little heavy, this is a pain. So there's a lot of friction that doesn't fit with our normal way of doing business that I think is going to take time to overcome. So I think that's-
Olivier Lafontaine:
Teenagers don't forget their phones anywhere. They're never out of battery.
Audience:
It's such an important thing as a technology vendor when you're going through an implementation as well. If you're on a call with a client doing requirements gathering, with scope creep and how things can change, if you have that transcription and you have that recording, it keeps people on track.
Amanda Turcotte:
Oh, definitely. No, I mean I have been the typist many times for requirements gathering. You better be fast.
Olivier Lafontaine:
That opens an interesting dilemma though. It's one thing to record your calls internally between management and employees of your company. What happens when the call is happening with third parties? How do you feel? Have you had this debate? Can you record those calls with, let's say, one of these industry analysts? They're nice people, but then they record the call on their side and you're telling them your deepest secrets. Is that okay?
Darwin Larrison:
We get prompted. So I'll join a third party call and there's a prompt that says you agree to be recorded. And I've said no before. It depends on what it is. If it's a negotiation, I just don't see a benefit. I don't like it personally. Now, do I think the technology is absolutely 100% there? Totally. Taking a call, it's great; note-taking, or transcription, is fantastic. I think you do have to dispose of it for legal reasons, say after 90 days, but you can have that information learned and then dispose of it, whatever your data retention policy is. But people get concerned. That's why the execs get concerned, because all their chats are going to be pulled into a lawsuit. But if there's a way to have that chat learned, the key information but not the key bad information.
Yeah, I want to get to the challenges because I do want to answer. I do find it interesting because I get feedback. When this first started two or three years ago, it was dropped in our lap. It has turned out to be a good thing. It was a little annoying, but it was also exciting. Okay, review done. Now today we're starting to get the controls. The controls that were lacking: people are asking for third party AI usage. Okay, an actuary wanted to use, I don't remember if it was Grok or what the model was. And we can't say yes. Even though we say don't put in our private data, it's not a trust thing, you just can't, it's not appropriate. We wouldn't be professional because it wouldn't be respectful or right for our members; we call our customers members.
And so you've got to wait until we have a control structure for that. Those controls are now being built to where they do prompt monitoring now, and we're testing that, so we can see what's being put in. And then like a DLP, data loss prevention technology, let's go with that. It'll stop it if it sees certificate numbers, which are our contract numbers, or some other piece of PII; it'll stop it from going into the public model. It's as simple as that. It's pretty conceptually simple. We're just in the middle of testing all that.
But that was the first challenge was the scariness of losing proprietary data into public AI, which people could do. Then we got the control of, we can see what people are doing now, ah-hah, even what they're typing, ah-hah. So the tools are coming about, we just aren't quite blocking yet. And then once that happens, we can start approving more stuff once we prove it is effective and works and they could go outside of our walled garden and use other AIs.
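The prompt-monitoring-plus-DLP gate Darwin describes really is conceptually simple, and a minimal sketch might look like the following. This is purely illustrative: the certificate-number format (CRT- plus seven digits), the rule names, and the function are all invented for the example, not the actual patterns any vendor or carrier uses.

```python
import re

# Illustrative DLP-style patterns only; real deployments use the vendor's
# managed rule sets. The certificate-number format here is invented.
PATTERNS = {
    "certificate_number": re.compile(r"\bCRT-\d{7}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str):
    """Return (allowed, matched_rule_names) for a prompt bound for a public model."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

# A prompt containing a certificate number gets flagged before it leaves.
allowed, hits = check_prompt("Summarize the claim on certificate CRT-1234567")
```

Note that "monitoring mode" would just log `hits` without blocking; flipping to enforcement means acting on `allowed`, which mirrors the "see what people are typing first, block later" rollout described above.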
Olivier Lafontaine:
Is that a different vendor then that came in or is it part of a-
Darwin Larrison:
Same vendor.
Olivier Lafontaine:
Same vendor, but they give you tools to monitor it. That's interesting.
Darwin Larrison:
Entire ecosystem. That's a whole other seminar.
Olivier Lafontaine:
You want to share all these things.
Darwin Larrison:
Yeah, it's a whole other seminar, but yes, it is from that vendor, and the whole suite's there, pretty much. And that's another interesting thing, y'all. There's a bunch of vendors out there and I've looked at some of them and they look fantastic, but I don't want to say, you know, gun to my head, but we are a certain type of shop. When you're in that shop and that vendor offers that product, guess what? They get first look, whether it's stated or not, because there are going to be price breaks, you're using, "Oh, what is it? Volume discount." That's what I'm up against.
So you've got to prove, and it's very difficult to prove, that the third party is going to be better than the one that's already offered, especially if you're using an 80/20 rule. But let's say, since it's security, it's 90/10. So if they do 90%, you've got to go with the one that you're already in bed with. Does that make sense? That's where things are at with me, anyway. It's not a bad thing. They're doing okay.
Olivier Lafontaine:
Have you been able to do in your company projects that have an actual impact? You talked a little bit about that I think with Amazon Q, so projects that are actually making an impact on the customer experience?
Darwin Larrison:
The only thing I can think of off the top of my head that relates, because I see them all, because they come through our process, was a sentiment analysis, which many organizations would consider high risk, because sentiment is judging: having AI give a score to somebody's attitude on the phone, whether it's the customer or the customer service rep. So I told you that we have a form we go through, and in there was customer impact and employee impact, and both of those got dinged with sentiment analysis. On the customer side, the control plan has to say that nothing comes through with the scores; the scores are only used to flag the call to be listened to for training purposes. So we could say, "Hey, how could you have handled this better with this person?" And it will not be used against the customer. No underwriting will be performed based on whether they got mad during this phone call. That kind of stuff. We narrowed the idea down to that.
And then the customer service side, this came from an HR person that was on the committee. She originally brought it up, said, "Well, that scoring could be used in the person's performance appraisal?" And I went, "Whoa." And this was in the early days of this when it was really stressful and headache-ridden meetings to figure out how far we go and how deep we go.
And she brought that up, and it was such a fantastic, great point. And so the control plan says this data is not going to be used to judge the person's performance, the CSR's performance; it's purely for performance improvement or training purposes. So yes, that has an effect on the customer, but we aren't going to allow it to affect them as far as judging them negatively. It should only be positive, in that our customer service reps will be able to work with them better. And eventually it'll probably be bots anyway. Just kidding, but not really. It'll be agents and we all know-
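The "flag-only" control plan Darwin describes, where sentiment scores surface calls for coaching but are never retained against the customer or the CSR, can be sketched roughly as below. Everything here is an invented illustration: the keyword-based `score_sentiment` stub stands in for a real sentiment model, and the threshold is arbitrary.

```python
import re

def score_sentiment(transcript: str) -> float:
    """Stand-in for a real sentiment model; returns 0.0 (neutral) down to -1.0.
    Real systems use trained models, not keyword counts like this stub."""
    negative_words = {"angry", "frustrated", "upset", "unacceptable"}
    tokens = re.findall(r"[a-z]+", transcript.lower())
    hits = sum(t in negative_words for t in tokens)
    return max(-1.0, -hits / 3)

def flag_calls_for_coaching(calls: dict, threshold: float = -0.5) -> list:
    """Return only the IDs of calls worth a coaching review.
    The numeric scores are intentionally not returned or stored, so they
    can't later feed underwriting or a performance appraisal."""
    return [cid for cid, text in calls.items() if score_sentiment(text) <= threshold]

flagged = flag_calls_for_coaching({
    "call-1": "thank you so much, that was helpful",
    "call-2": "this is unacceptable, I am angry and frustrated",
})
```

The design choice worth noting is that the function discards the scores and emits only call IDs, which is one concrete way to enforce the governance rule in code rather than in policy alone.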
Olivier Lafontaine:
We don't know what we're going to call it, but certainly there's going to be something. Maybe I wanted to ask you in particular, I know you've worked for startups in your past, you said this at the beginning. Do you see a difference in how startups are working with AI, adopting AI, thinking about it? How do you see the difference there between corporate and startup?
Amanda Turcotte:
Yeah. Well, I mean, I think startups have a very different problem than an established insurance carrier. An established insurance carrier needs to grow top-line revenue by 6% this year, maybe hit an ROE of 12%. A startup that just raised their seed round has to have infinite growth this year. And that is the only KPI that matters: whether we get from seed to Series A. So anything that we can do to focus all of our energy at a startup on what is going to drive growth and revenue in this company, what's the cheapest way that we can get to that goal? Because venture dollars are very expensive. We've given away our equity in this company in order to get this cash to fund growth and revenue. So having an AI tool that can help you code faster, that can help you test better, that can democratize information.
And every member on a startup team has unique information. You're there for a reason. One person is there because they're the technologist, the other one is the insurance specialist, the other one can fundraise and knows what our investors are interested in. So having tools that capture all the information that you can, have it in one wiki, and nobody's typing that wiki, that all that is auto-generating so that everybody in the company has access to that information. I think that's the top priority. Whereas once you... [inaudible 00:39:29] in the company and had this transformation, once you get to a certain size, well now we have to have controls in place, now we have existing customers, now we-
Olivier Lafontaine:
The real world kicks in.
Amanda Turcotte:
Right. Depends, right? And that's where established companies live in the real world. It's a different real world. They're both real, but it's just different problems. But yeah, so how you interact with your customers changes, right? So I think that really impacts the way both of those organizations approach the AI question.
Olivier Lafontaine:
So I think we'll wrap up maybe with one last question. So if you were to have a crystal ball and try to predict or what's your thought in terms of if you look forward the next couple of years, what's going to happen, at least in your environment, your company and your role, what do you think is going to be the biggest change?
Darwin Larrison:
This is a dangerous one for me, because there have been a lot of Instagram clips, and there are a lot of knowledgeable people that run AI companies and such. And I don't want to say it's scary. It's scary and it's awesome at the same time, and I'm glad I'm alive during it. I'm glad, well, I've got a ways to go until retirement. But I also have sons, and I'm telling them AI, AI, that's what I'm telling them. Information security, sure, but AI-flavored information security.
Five years, I think definitely there will be job loss. I don't know if you watched Elon's latest robot video where it's now dancing like a human being, where they've got the joints. Two years ago it was stiff, you know, like the automatons at Disney. Now they're dancing like humans. And the other day I got a Surface device at home, my personal one. And I enabled this AI thing, and it said, "Hello, Darwin," in the most pleasant voice. "What can I help you with today?" And I started chatting with this AI. My wife's in the back and she goes, "Creepy." And the AI goes, "Oh." Right back to her. It did. It sounded like it had an attitude. It could be my perception, but it went, "Oh," like that when she said creepy, after it got done speaking.
But the reason I tell you that story was this conversation I had was perfect. It was like perfect. I was talking about earbud tips, the little rubber things, and they're Samsung and she was just talking back to me, super pleasant, super nice. Okay, so this whole CSR thing I'm talking about, please, five years. So I think that's going to happen. I don't even know if we're ready.
And then the other one I saw was the guy who started OpenAI, and I'm forgetting his name right now. His prediction was we're not ready for what we're going to see in five years when we start seeing the bots, the robots going down the road. You could totally see personal assistant robots going to pick up people's food and things like that. So the radical change that's happening right now with robotics, with AI, and even quantum, goodness gracious, I think it's fun and exciting to be alive and having to adapt. Thank goodness. I mean, I kind of enjoy that a little bit. It makes the career interesting. I don't know if that helped.
Olivier Lafontaine:
Yeah. Yep. Amanda, your crystal ball. What does your crystal ball say?
Amanda Turcotte:
I think I'm going to be a little more optimistic. A little bit about our company: Amalgamated was founded in 1943 as a wholly owned subsidiary of one of the largest union pension funds in the US. At our core, we're a union company. We believe in the power and strength of human work throughout the nation. We exist to serve working Americans and their families. So we really value our human capital at Amalgamated.
But again, we are a small company, and with that come capacity constraints on what we can really do with the limited resources we have. And I'm really excited about what AI can do for our company. I think it's going to allow us to be even more competitive with [inaudible 00:43:36] and all their human capital. We're going to be able to answer our customers' calls even faster, and accurately every time, still with a human touch. So that when you're having those really challenging times in your life, you're not talking to a bot; you're talking to somebody on the other end of the phone with a heart, with emotions, who really connects with you and cares.
And I think we're going to continue developing products. The people in our company are going to focus on developing products that fit the unique needs of our chosen target market. So I think it's just going to allow us to really leverage our mission and serve our clients better.
Olivier Lafontaine:
Darwin and Amanda offered a grounded look at navigating AI adoption and how to balance innovation with the realities of governance, security, and limited resources. What stood out is that progress doesn't always start with flashy tools. It can sometimes be as simple as practical use cases, strong internal controls, and especially a willingness to learn whether it's improving customer service with intelligent document processing, or setting up guardrails around vendor tools.
Both Amanda and Darwin showed that change comes from thoughtful execution. Thank you for listening.