Good morning, everyone, and welcome to LCP's Annual Reserving Seminar. I hope you're enjoying the weather we've got today. Before it gets too hot tomorrow, we'll start moaning about the weather. My name is Stuart Mitchell and I'm a partner in LCP's Insurance Consulting practice. I will be chairing today's session and I'll be joined by my colleagues Charles, Jade, Ed, Annabelle, Charlie and Phoebe. Before we start, I'd just like to guide you through what you can see on the screen in front of you. You should be able to see the slides and videos of the speakers, and you can also find our bios and contact details, and we'd love you to follow up on any questions with any of us afterwards. On the left-hand side of your screen, you should be able to see links to further content. This includes the slides and results from our reserving round tables earlier this month, including some interesting results on the level of margin and how it's changed since last year. There's also a link to our podcast series, Insurance Uncut; the last episode featured Maxine Goddard and it's well worth a listen. And then finally, our recent global reserving transformation review, which links into one of the topics later today. Most importantly, you'll find a Q&A box where you can ask questions as they occur to you during the talks. We'll keep an eye on these and address as many as we can as we go through today. But we've got quite a large number of attendees today, so if we don't get through all the questions, we'll follow up with you afterwards. And finally, you can tailor the screen by dragging the various windows around and changing their sizes. The webinar is being recorded today and we'll email it to you so you can rewatch it, and we'd love you to share it with as many of your colleagues as would be interested in it.
Before we get going with the main content today, I'd like to plug our own conference on the 13th of May. This is one of the most engaging events that we've ever run. You come along with the topics you'd like to discuss and set the agenda live on the day. This means that everyone gets to talk about and listen to what they're most passionate about. And if you've enjoyed the informal discussions at our round table events, our conference takes that to the next level. But how do I get there, you ask? Well, if you haven't spotted it, on the right-hand side of your screen there's a button where you can register and find out more details. And so on to the agenda for today's session, where the topic is transforming your reserving process from end to end with analytics. First, Charles and Jade will give us some thoughts and ideas on how to unlock real value from claims data. Then Ed and Annabelle will be sharing their real-world experiences of reserving transformation processes, including the good, the bad, and the ugly. And I'm hoping at least some of you are old enough to get that reference. And finally, Charlie and Phoebe will be taking us through some examples of how analytics can help solve reserving problems and provide insights to the wider business. After each talk, we'll pause to answer questions, so please keep those coming in by using the Q&A button. So on to our first topic, and over to Charles and Jade. Good morning, everyone. Very excited to be talking to you about claims analytics this morning. This is something I've been wanting to talk about for a long time, and it's an area where we're doing a huge amount of research and development of methodologies at the moment, some of which we're doing in collaboration with firms around the market. So what do we mean by claims analytics? Of course, when we're doing reserving, we are analysing claims data.
But to slightly differentiate from that, what we mean today when we say claims analytics is specifically looking at the journey of a claim through the firm after it's been reported. Another way of looking at it would be to say that we're talking here about analysing the human business of claims handling and case reserving. And I feel that this is an area that would benefit from greater exploration. So let's place that in a little bit of context, and let's think about an issue that's important to all insurance firms, which is managing reserving uncertainty. I think it remains true that there's nothing quite like a big, ugly reserve deterioration to hurt the company's share price, to put the senior management at risk, potentially even turn you into an acquisition target. And the best firms are doing a lot of work on an ongoing basis to manage and understand reserving uncertainty to try and avoid surprises. Now, a framework that we've found very helpful for thinking about this is what we call the four dimensions of reserving uncertainty. So you have the one-year view of risk, which is very much the real-world view of reserving uncertainty: how much could our reserves deteriorate over the next financial year? There's the ultimate view of risk, which is kind of the model view, thinking about what those modelled ultimate claims might look like versus the reserves that you're holding today. There's the best estimate range, because of course we know that there is no one perfect best estimate of reserving liabilities; there's a range. And understanding how wide that range is, where you are in that range and where you want to be in that range is definitely key to managing your reserving uncertainty. And then the fourth dimension is the contribution of claims handling to uncertainty. And in our experience, this is the most underexplored of the four dimensions, so that's what we're going to focus on today.
So there are so many different ways to look at claims data and to analyse claims, and many different KPIs that you can come up with. This is just a smattering of examples, and we group them into different types with different focus areas. And really it depends on who's analysing the claims and for what purpose. There's some interesting contrast between the way that a firm would look at claims data from a claims team perspective versus, let's say, from an actuarial perspective. One key difference is that when we're looking at claims settlement performance and claims handling performance, we tend to be looking at a calendar period of time and all claims that were handled or settled during that period. Whereas when we're looking at it from an actuarial perspective, we tend to group claims into the cohorts that they came from, in particular underwriting year or accident year, and look at the journey of the claims from there onwards. A second key difference is that when we're analysing claims from a claims handling perspective, usually we have claims settlement mainly in mind. We're thinking about how well we actually settle and close down claims, whereas from an actuarial perspective, we're often really focused on the journey that the claims take before they come in the front door. So what are the reporting patterns? How much IBNR do we need to allow for? And then a third key difference is that day-to-day, claims handlers are setting case reserves on claims, and they will typically have procedures that they follow to do that, and a degree of conservatism or otherwise that they adopt. Some firms talk about a sort of six-out-of-ten adequacy type of philosophy. Other firms would talk about a best estimate philosophy.
Still other firms would talk about a sort of worst likely case. But whatever that basis is, the job of claims handlers is to set those reserves, not necessarily to go back and analyse the adequacy of those case reserves. That, of course, is the job of the actuaries when we're doing reserving and when we're thinking about pricing and capital considerations as well. Now, one of the key weaknesses that most firms experience is that analysing case reserve adequacy tends to be a retrospective business. We look historically at how adequate case reserves have been, and something that's really missing is a sort of real-time perspective, where we can get a sense of how strong or weak case reserves are today on the claims that we haven't settled yet. So even with the best and most experienced claims teams that we deal with, what we find when we do that retrospective analysis is that case reserving strength varies more than you might expect. Sometimes it's strong and other times it's weak. And that can lead to potentially emotive discussions about case reserve strength, especially when teams are coming under pressure from management. But when we look at a chart like this and see all this variability, that doesn't mean that the claims teams are necessarily doing a bad job. What they're trying to do is react appropriately to real-world situations. And once we accept that case reserving is inherently variable, it opens up the opportunity to take the emotion out of the conversations, and it gives you a chance to take a real step forward and say, actually, let's start monitoring this variability. So that's where the idea of a case reserving strength index comes in. It turns this into a scientific, trackable measure. And then it's about developing a view today, in real time, about how strong your case reserves are. But first, what is a case reserving strength index?
So we think it should be made up of a basket of measures which each, in their own way, give clues to the strength of case reserves. And we say a basket because there's no one metric that would be appropriate on its own. Take a paid-to-incurred ratio: if that's higher than normal, it could mean that your case reserves are weaker, but it could also just mean that you're paying your claims more quickly. So on its own, that measure can't tell you if the case reserves are stronger or weaker, but it is a very good place to start. Another measure could be the value of recently settled claims against their incurred, say, six months prior to settlement. So essentially, how well did we predict them six months out? And then triangle analysis using reported cohorts is also very valuable: are they trending up or down over time? And the biggest difference with reported cohorts versus your usual accident or underwriting basis is that once claims are reported, we're just tracking the case reserving journey. So these are just some examples from the three main families of metrics that we think are appropriate. And what we find in practice is that you can essentially basket together a range of KPIs from these families to create this strength index. And then that index will show you how today's case reserves compare to those previously. For instance, it could tell you that your case reserves are expected to be 4% weaker than they were last quarter. And this then creates a foundation for better discussions and more informed reserving. So what makes a good index? It should be tailored to the book and the business. It needs to be trackable and testable over time, because the index itself will also need testing and refinement over time as experience emerges.
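To make the basket idea concrete, here is a minimal Python sketch of one way such an index could be built: score how unusual the latest quarter's value of each KPI is against its own history, then combine the scores. The figures, the choice of two metrics and the equal weights are all hypothetical illustrations, not a prescribed LCP methodology.

```python
from statistics import mean, stdev

# Hypothetical quarterly KPI history for one class of business,
# latest quarter last. Both metrics come from the "basket" families
# described above; the numbers are invented for illustration.
paid_to_incurred = [0.62, 0.60, 0.63, 0.61, 0.70]
settled_vs_incurred_6m_prior = [1.02, 0.99, 1.01, 1.00, 1.10]

def z_score(series):
    """How far is the latest value from its own earlier history?"""
    history, latest = series[:-1], series[-1]
    return (latest - mean(history)) / stdev(history)

def strength_index(metrics, weights):
    # On these two metrics a high z-score suggests weaker case
    # reserves, so flip the sign: higher index = stronger reserves.
    return -sum(w * z_score(m) for m, w in zip(metrics, weights))

idx = strength_index(
    [paid_to_incurred, settled_vs_incurred_6m_prior], [0.5, 0.5]
)
# A clearly negative idx would be the "amber light": case reserves
# look weaker than their recent history.
```

In practice the basket would hold many more KPIs, each needing its own view of which direction means "weaker", and the weights would be tested and refined over time as the talk describes.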
It has to be objective and data-driven, because that is the whole point: to take the emotion out of these conversations on case reserve strength and instead use some data. And it should then also become embedded into processes; we'll come back to that later when talking about how having something like this can speed up and help your processes. So we've now talked for a bit on the case reserving strength index, and we'd recommend that this is a really good place to start in the claims analytics journey, just because of the immediate benefits. But once you have a handle on case reserving strength, what are the next steps? So going up the levelling-up path, we have claims journey analytics, and this is analysing how claims move and develop. You take the claims journey from reported to closed, break it into different stages and analyse each of them. This is an example of what those stages might look like: when the claim was first reported; when it was assigned a standard case reserve; when you got enough information to provide a bespoke case estimate; then negotiations on liability and quantum; legal proceedings; settlement and closure. Different firms and lines of business will define these stages differently, and the number of stages can vary, but what's important is that the segmentation makes sense for your data and processes. But let's zoom into one of these stages, say stage 5, during the negotiations with claimants. And on the screen is a chart showing how many weeks a claim was in this stage. So back in 2022, claims spent an average of two weeks here; in 2023 it was three weeks; and now it's over four weeks. So that raises the question: was this a deliberate decision? Are we proactively spending more time negotiating for better outcomes, or are there inefficiencies and potential leakage? There's just so much value to be gotten from these sorts of conversations. And this is just one example of what you could be looking at.
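The stage-duration chart described above is straightforward to reproduce from a stage-transition log. Here is a small Python sketch, with an invented log and an assumed record shape of (claim id, stage name, date entered, date exited); real systems will of course differ.

```python
from datetime import date
from collections import defaultdict
from statistics import mean

# Hypothetical stage-transition log: (claim_id, stage, entered, exited).
transitions = [
    (1, "negotiation", date(2022, 3, 1), date(2022, 3, 15)),
    (2, "negotiation", date(2022, 6, 1), date(2022, 6, 15)),
    (3, "negotiation", date(2023, 2, 1), date(2023, 2, 22)),
    (4, "negotiation", date(2024, 5, 1), date(2024, 5, 31)),
]

def avg_weeks_in_stage(rows, stage):
    """Average weeks spent in one stage, grouped by year of entry."""
    by_year = defaultdict(list)
    for _, s, entered, exited in rows:
        if s == stage:
            by_year[entered.year].append((exited - entered).days / 7)
    return {year: round(mean(weeks), 1) for year, weeks in sorted(by_year.items())}

# avg_weeks_in_stage(transitions, "negotiation")
# mirrors the trend described in the talk: roughly 2 weeks in 2022,
# 3 weeks in 2023, over 4 weeks now.
```

Running the same function for every stage gives the full picture the speakers describe, one duration trend per stage of the claims journey.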
So now imagine doing the same analysis for each of the different stages, and they'll each tell you different things. They'll each give you greater insight and understanding of the claims at each stage, and that enables us to have focused discussions and highlight opportunities to improve. So one key difference between the case reserving strength index and the journey analytics is that with the strength index, we're not trying to do anything or make any changes; we simply want to observe, measure and monitor things. With the journey analytics, this is the point at which we might start wanting to do things differently, so it's where you'll spot a lot of opportunities. But should we be using these insights to make changes to the way we handle claims? It's not actually an immediate or obvious yes, because making those changes then causes distortions and disrupts patterns and other development trends. So any changes need to be made in a very structured way, such that the impacts of any initiative can be monitored and tested. But now that can be done very objectively with the analytics that we've discussed so far. So the case reserving strength index and the journey analytics are pretty clear concepts to get your head around. But in fact, are they pointing to a greater thing? Going even further up the development path, and this is a slightly further-out part of the journey, you could introduce some free-form machine learning algorithms: something that essentially tries to develop a mathematical picture of what normal claims handling looks like. So rather than tracking fixed metrics or cohorts, you instead develop a model that will look at every transaction, change in state, etcetera, and then develop a picture of what's normal. And once that is established, you can then flag anomalies and exceptions. So sorry, can I just check, are the slides moving on? Because they're after freezing for me.
Yeah, you're on the right slide, Jade. Perfect. Thank you. So then on this slide, we have three examples of what could potentially be highlighted. Say each of these charts is a KPI. The first one is an average claims cost, and the algorithm has shown that downwards trend and has also flagged two outliers of very expensive claims. The analytics here will remind you to look at what's happening with the big exceptions. The second one is a settlement delay, and it has highlighted the point at which there was a trend change and where the delay began to reduce. So we should then look into what changed on this date and what caused it. For example, was this because of a change in process, or was it an external market factor? And the third highlights a step change, where an average case estimate jumped from 1,200 to 1,600. But actually, in this example, it turns out there was a very good reason for this jump: this was the date a new standard case reserve was introduced. We know about this. So this is a really good example of going back to the algorithm and saying, oh, that's OK, we know about that step change and we know what caused it. And so it's a great example of how context matters when interpreting trends. The problem is that that context often lives in the heads of a few key people. So this is where the concept of a claims event timeline comes in handy, because in any book that's been around for a long period of time, there are a lot of things that have happened, some external and some internal, and only a few experts will know of these distortions. So, for example, they'll know that in 2022 there was a big claims backlog and claims weren't being processed in the normal way. And then in 2023, a new system was implemented, so some claims were still on the old system and some were being processed on the new, and so they all had very different settlement speeds.
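The simplest version of the outlier flagging described for the first chart can be sketched without any machine learning at all: compare each point of a KPI series to a rolling window of its own recent history. This is a deliberately basic stand-in for the free-form algorithms the talk mentions, with made-up numbers and an arbitrary window and threshold.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=8, threshold=3.0):
    """Indices of points sitting far outside the recent 'normal' band.

    For each point, 'normal' is defined by the mean and standard
    deviation of the preceding `window` observations.
    """
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        m, s = mean(hist), stdev(hist)
        if s > 0 and abs(series[i] - m) / s > threshold:
            flags.append(i)
    return flags

# A stable KPI with one very expensive period: only the jump is flagged.
kpi = [10, 11, 10, 9, 10, 11, 10, 10, 10, 40]
outliers = flag_anomalies(kpi)
```

A production model would look at far richer inputs (every transaction and change of state, as the speakers say) and would also detect trend changes and step changes, but the principle is the same: learn what normal looks like, then flag the exceptions.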
So wouldn't it be nice if that history could be taken out of people's heads and into a system where everyone could see it and use that information? And that's the concept of a claims event timeline. Essentially, it's a book of everything you need to know before interpreting any trends or features in the claims data. It provides a visible, structured record of what's happened in a book of business over time, and that shared history allows everyone to interpret trends correctly. And even better if you then link the timeline to all three stages of the levelling-up path: the strength index, the journey analytics and the algorithm. So, like the example where the algorithm flags a step change in the average case estimates, the system can recognise: oh, these claims are part of code XYZ, and this refers to a specific event on the timeline. And if that can then be built into the process, it allows for a proper interpretation of trends and also more efficiency, because a known exception doesn't need to be highlighted for the actuary to then excuse it and say, this is OK, we know what caused it, it doesn't require further investigation. They can instead focus on what actually needs their attention. And where it could also be very powerful is when a new initiative is implemented. It can be logged in the claims event timeline and then monitored very easily, because all the analytics, the metrics, the trends and exceptions identified will be linked to it. So when you want to look at the impacts of an initiative, you can easily get the relevant information associated with it across all of the analytics that you have. So to summarise, this is a journey, and it's about levelling up your analytics. At first, we're not changing how claims teams operate. We're just observing what's happening, treating the variability as data and tracking it over time, and moving up the path.
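A claims event timeline can start as nothing more than a structured list of dated events plus a lookup that says which events could have touched a given claim. The sketch below assumes a very simple overlap rule (the event window intersects the claim's open period); the event codes, dates and data shape are all invented for illustration.

```python
from datetime import date

# Hypothetical event timeline for one book of business. An open-ended
# event (end=None) is still in effect.
timeline = [
    {"code": "BACKLOG22", "start": date(2022, 1, 1), "end": date(2022, 12, 31),
     "note": "claims backlog: processing delayed"},
    {"code": "NEWSCR", "start": date(2023, 7, 1), "end": None,
     "note": "new standard case reserve introduced"},
]

def events_for(open_date, as_of):
    """Timeline entries whose window overlaps a claim's open period."""
    hits = []
    for ev in timeline:
        ends = ev["end"] or as_of
        if ev["start"] <= as_of and ends >= open_date:
            hits.append(ev["code"])
    return hits

# A claim opened mid-2022 and still open today is tagged with both the
# backlog and the new standard case reserve; one opened and closed in
# early 2023 is tagged with neither.
```

In a fuller version each event would also record which dimension it distorts (settlement, reporting, case reserving), so that the strength index, journey analytics and anomaly flags can all link back to it automatically.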
Once you deeply understand your claims journey, you can implement changes with confidence, because you'll better spot opportunities, and you can also monitor the effects of those changes better. And eventually, algorithms can highlight anomalies, trends and opportunities that you wouldn't even have thought to look for in the earlier stages. And the claims event timeline becomes your backbone, the piece that grounds all analysis, because most of the trends you spot will be grounded in things that you know, and with a structured timeline, all of those trends can be traced back to known events. So if there's one thing to take away, it should be about starting to create this claims event timeline. So what is the payoff of all of this? Well, here's a good example; you'll recognise this if you're involved in reserving. So, typical situation: you're doing your Q1 reserving, and as part of that process you spot an interesting and difficult-to-interpret trend in the claims development data. We've all been in that situation where, because of the time pressure on finalising Q1 reserves, we're not able to fully bottom out the reasons for the change in development. We simply have to make the best estimate we can at the time and then log it for further investigation during Q2. And the downside of this is that when you spot something interesting in the data, it often takes two quarters to deal with it appropriately in the reserving. Now, what if on the 1st of January, when you get the claims development data, you're also getting a sort of amber light flashing on the case reserving strength index, telling you that case reserves are 5% weaker than normal? You then have the opportunity to build that thinking into your reserving in Q1 and respond appropriately in the same quarter. So we're really talking about shortening that learning time from two quarters to one quarter. And as most firms will know, that can make a really big difference.
Let's generalise from that for a moment and just think about what the key advantages are of analysing that human business of claims handling. I think a good point that Jade made earlier is that, first of all, we're making the most of the information that we have. We're treating the natural variability as data and something to be analysed. But also, once the analytics are further developed, they are going to flag for you things that you wouldn't otherwise have noticed. So we're unlocking deeper insights and identifying opportunities to drive change in the business. The second one is accelerating that learning journey. And if you've heard me speak at these seminars before, you'll know that I often like to say that insurance is a learning business. And if you can learn more quickly from your claims data and your experience than your competitors are doing, then you can be more nimble strategically with pricing and with market positioning. And that is something that will make a huge difference to your performance. And then finally, the very important point about making changes and improvements to your claims handling. There are always opportunities to improve that, but it's so important to set up a framework where, when you make those changes, you're testing them and highlighting them from day one, rather than just allowing them to distort your data patterns and make it even harder to make further inferences. So very much looking forward to talking to you more about this over the coming months. And please get in touch if you'd like to know more. Thank you. Thanks, Charles. We've got some questions coming through. The first one is: do you have a recommended piece of software which is good for a claims event timeline? That's a good question.
At this stage, these ideas are relatively new, and what we find is that at the moment they're best implemented using whatever coding solution or data handling solution you already have in place. The key thing with that claims event timeline is, first of all, get it up and running and make sure that everything that would be in the little black book or in certain people's heads is out in the open. The real value then comes from starting to tag claims against items in the claims event timeline, because certain items will affect claims, let's say, in the settlement dimension. It could be that when we had a backlog, all claims that were outstanding and due to be settled in a particular quarter were affected. Other initiatives might affect the reporting dimension. It could be something to do with our standard case reserves. It could be something to do with, let's say, a claims settlement portal run by the state. Other things could be to do with, you know, legislative change that would affect all claims that had got past a certain point in the claims life at a certain date. So getting those dimensions correct, and starting to tag claims with all the items in the claims event timeline that they might be affected by, is when you start to get the value. OK. Next question is: what are the practical challenges that firms encounter when they're trying to start the claims analytics journey? I think something that we encounter a lot is where the claims analytics team is set up with a very broad remit to kind of look at claims data and try and get new insights, but perhaps without enough structure in the remit. And perhaps that's just that the board has decided it's a good thing to do, but they're not quite sure what they're going to get out of it. And then we find you can get a certain amount of decision paralysis, because there are so many different aspects of claims that you can analyse.
And that's why we're proposing the development pathway that we talked about today: something really tangible like case reserving strength gives immediate benefits, then you very naturally progress into breaking the claims journey up into its component parts and analysing those. And what we're aiming for ultimately is something that can give us a real-time warning of any potentially material deviations from normal claims handling behaviour. So hopefully that logical structure helps to give direction to what your claims analytics teams are doing. OK. Next one: how effective is the case reserving strength index for low-frequency, high-severity portfolios? Low frequency, high severity. I mean, what will commonly happen in those sorts of books is that you'll also tend to have long reporting and settlement delays, and so there's a long period of time over which you're analysing claims. And that's an area where the case reserving strength index really shines, because it makes all the difference in the world how appropriate and consistent your case reserves are. And it doesn't really matter how many or how few claims you have. But something that we've seen for books that have very high severity claims, that are potentially quite litigious and involve a lot of expert inputs, could be medical reports, etcetera, is that there's more and more information that you can bring into the case reserving strength index that you might not otherwise be able to. So it could be textual analysis of information in the claims file, information that was provided at the time of reporting, which might help to give a sort of hint that this is a claim that might become really big. This also links really well to the idea of a watch list, and many firms run watch lists for claims that are potentially large but where there's not enough information.
So yeah, it's always difficult when you've got less data, but the solution tends to be to go deeper, to get as much info as you can from the case file and then try and build it into your strength index. OK. We're working you quite hard this morning. One final one before we move on, and we may come back at the end with some more questions: who would be better suited to perform claims analytics, the reserving or risk team, or a separate claims analytics team? That's something I've thought about a lot. And I think the fact that we're seeing distinct claims analytics teams being set up is kind of symptomatic of the fact that those other teams, be it the risk team or reserving team, tend to have a very full plate at the moment. So I can understand why firms would put separate dedicated resource aside to look at this. And in the early stages of building up your claims analytics capabilities and going along that pathway, I think it remains helpful to have a distinct team. But I can't help thinking that that should not be the long-term goal. Because what I think we generally want is that, longer term, these processes become embedded into the work of the risk team, of the reserving team and, of course, the claims team. We've only just touched on the fact that, you know, there's potentially emotion in discussions about, especially, case reserving strength or settlement efficiency, etcetera. But actually this is a journey where the claims team has got to be brought along and where they've got to actively participate. And by making things more objective and measurable, I think we can help claims teams to get benefit out of this without feeling like they're being put unnecessarily under the spotlight. So the long-term goal, I think, is hopefully to not need a claims analytics team, but to have all of this embedded in our existing processes. OK. Thanks very much, Charles, and to Jade as well.
It's time to move on to our second topic, about reserving transformation. So it's over to Ed and Annabelle. Brilliant. Thank you, Stuart. So myself and Annabelle have got a three-part session planned now. We're going to start off by talking about what firms want to get out of transformation projects, and specifically why there's so much interest in undertaking reserving transformation right now. Secondly, Annabelle's going to run through how you set yourself up for success from the get-go, with effective planning and resourcing from the start. And then I'm going to bring it back to having a transformation mindset and some of the common pitfalls we see when working with firms to give life to their transformation objectives. So starting you all off from the top: why are firms doing transformation at all? And specifically, why are we doing projects now? If we look at the graph on the left-hand side, this is the top 10 reasons people gave for undertaking reserving transformation projects in our 2024 Global Reserving Transformation survey, where we covered 160 insurers worldwide. I guess looking down the list of things on the left-hand side (process efficiency, automation, spotting trends quickly, more efficient reporting, data quality), they're all things we probably look at and recognise for ourselves. Yeah, we ideally want a bit more of them. What's really interesting, though, is when you try and group them: in dark blue, the things that are more related to speed and efficiency of process, and in light blue, the elements which are more about the quality and insight that you get. And whilst both are important, I think by and large the speed and efficiency points tend to come first, albeit firms with a clever transformation programme can tend to deliver both concurrently. The real question is: why are firms doing this now?
And we really have seen a huge interest in reserving transformation projects in the market over the past year. I think we've got more on the go currently than we've ever had during the time I've been at LCP, and a really strong pipeline as well. There are three reasons that tend to come up. Firstly, IFRS 17 implementation is, for most of us, now completed or nearly completed and has delivered what it needs to. I guess that means two things. First, executive sponsor attention and transformation budgets within finance and actuarial cost centres are freeing up to be deployed to other things, like reserving transformation. And second, IFRS 17 projects have brought together reserving and finance data flows in a way that makes them front of mind. Most of those transformation projects have tended to deliver the requirements of IFRS 17, but not necessarily in the most efficient way, and so people are looking to capitalise on the momentum, keep things going and look at how to make both the actuarial and finance inputs into all downstream reporting more efficient and effective. Secondly, there's real interest in the increased availability of AI and automated tools out there in the market, whether it be open-source modules available in Python or AI-based reserving platforms you can buy off the shelf, like LCP InsurSight. And then finally, increasingly coming up in discussions around transformation projects is the acknowledgement that, whether you're in the personal or commercial space or the London market, there's a view that we are heading into a potentially protracted soft market. And firms are doing what they can to prepare for that by bringing forward investment in efficiencies across the board, not just in finance and reserving. But that's contributing to the excitement around AI tools and the ability to roll on and deliver some of the efficiencies highlighted during IFRS 17.
So it's never been a more exciting time to consider reserving transformation, and certainly a lot of the market is looking at it at the moment. Over to Annabelle now for some thinking about how to get the best chance of success from the start. Thanks, Ed, and good morning, everyone. So transformation projects are significant undertakings and they require full commitment from both the function and the wider business. They're often technically complex and exciting, and they offer a whole new level of reserving. Deciding whether or not to embark on a transformation project is not a decision to take lightly. And if you do decide to proceed, how you go about it needs just as much consideration. We've seen a wide range of approaches depending on each function's needs and overall business strategy. Whatever your approach, one of the key aspects is getting your objectives well defined early on in the process. Each reserving function will have core objectives for change, and this feeds into some of the points that Ed was just making. These may be centred, for example, around streamlining: a more efficient roll-forward process, the ability to aggregate reserving classes, or having more reserving functionality on one platform. Objectives could also be centred around enhancing output, for example producing more detailed reporting dashboards, or the ability to drill down to high levels of granularity when you're investigating trends. Most reserving functions know very clearly which objectives get them most enthusiastic for transformation, and given full control over timescales and other parameters, keeping the core objectives at the forefront is usually straightforward. However, something we've seen is that most reserving functions only kick off transformation when things hit a tipping point. And waiting this long can take the focus away from those core objectives.
And if you do wait until you get to a tipping point, there can be some bigger consequences involved. And these consequences are often big enough that they attract the attention of senior leadership, usually in a negative way. We've seen some common tipping-point scenarios in the market. These are, for example, when excessive manual processes are allowing little contingency time, risking the business not meeting hard deadlines; or a lack of process efficiency negatively impacting the strength of the peer review process, which could lead to a material error in the results. So waiting until things are critical can negatively affect the quality of the transformation solution itself. So why act early instead of being tipped into change? At the tipping point, you're more likely to go for a quick fix over a well-thought-out long-term strategy. An early transformation avoids disruptive firefighting, and it can keep the business more responsive to market shocks and change. And as I've mentioned, you can maintain more control over those bigger-picture objectives and avoid a solution that's focused on fixing one specific issue that has occurred. So once you've decided to move forwards and get onto the roller coaster that is a transformation project, the next big question is how to manage resourcing it. And there are two ways to approach this: fully internal, or bringing in either contractor or consultant support. So what are some of the benefits of having external support? External parties aren't emotionally tied to the current process, and they can therefore challenge outdated norms. If transformation is urgent because you've got to that tipping point, external help can also drive momentum, especially if internal resource or expertise is lacking. External support can also bring a wealth of market knowledge and experience, and this can potentially lead to a more competitive, forward-thinking solution. On the other hand, there are also benefits to a fully internal solution.
For example, an internal project encourages cross-functional collaboration and long-term business connectivity. Transformation also offers a great opportunity to upskill your team, create ownership and boost morale, especially if it's your team that defined the pain points that got you there in the first place. And there are lots of factors to consider when deciding whether to opt for external support. For example: your objectives for change in the first place, including how ambitious they are; your team size and structure — for example, if resource is already stretched, external support may be highly beneficial to your project; and your upcoming deliverables — for example, avoiding timescales that clash with an upcoming in-depth reserving exercise. Also bear in mind the timescales in which you want to achieve the change. You also need to consider the expertise you have in house, especially if you're wanting to migrate reserving platforms or the solution design is particularly complex. Finally, you also need to consider your budget for the project. So we've covered setting your objectives, maintaining control over those objectives, and resourcing the transformation project. We now wanted to highlight some key tips for planning the project. Although every transformation project is unique and requires a tailored approach to management, it's also really important to stay aligned to the foundational principles of project management. With so much complexity involved, the foundational elements can unintentionally be neglected. So with that in mind, before you kick off your transformation project, build a strong and passionate transformation steering group, and try to include representatives from key functions such as reserving, finance and data architecture. This will give diverse stakeholder perspectives, which should help you to secure senior leadership buy-in.
So really think about who needs to be around the table for those crucial discussions and decision making. Next, be clear about the areas requiring change. Focus on the biggest value add and address high-impact areas first. This process may start with a very large shopping list. For example: is there a specific spreadsheet that is really hindering the current process? Are there reserving classes that could be aggregated? Or is there a further split in the data that could be optimized for scenario analysis? But eventually this list will develop into prioritized, detailed goals for change. Next, get the timing right. We've mentioned not waiting until you're at a tipping point, but also try to work backwards from the minimal deliverables, ensuring that proper testing and contingency time is built into your project plan. Establish some pre-agreed touch points for your steering committee, and also define the phase outputs. For example, one phase may focus on a subset of your reserving classes first, with the solution then rolled out to the remaining reserving classes in subsequent phases. Finally, design the solution with the key users in mind. This sounds like a really obvious point, but it can sometimes be overlooked. Really trust the people who understand the process the best, as it's their insights that could shape a more practical, business-aligned solution and potentially uncover areas of the process that you may have overlooked. I'll now hand you back over to Ed, who's going to talk through how to optimise the solution once you're on the transformation roller coaster. We seem to be having a few problems with Ed rejoining. OK, sorry. I've done that classic 2020 thing of leaving myself on mute, but I'm back now, so thanks very much for that.
One really key thing I think from Annabelle's section is just that point around tipping points, and trying really hard to get yourself going before you're tipped into a transformation where one particular objective becomes all-encompassing at the expense of others. What I'd like to jump on now and talk about is transformation mindset. Because even if you've given yourself the best chance of success with your project planning, your resourcing and your objective setting, it's still really difficult to make good headway unless you've got the right transformation mindset. Now, the right mindset really varies from organisation to organisation. There's no one-size-fits-all, and what fits best is really whatever fits within your own organisation's culture, ethos and priorities. But what we have encountered across various projects we've been involved in is four types of behaviour that I think are best avoided. And I'm sure, as I go through and illustrate these, you'll be able to recognize elements of them in your own personality. Certainly I recognize at least one or two of them in mine, or so my team tells me. So four things to think about when you're going through the transformation process are: whether you're a bucket-lister, someone who has a really, really long list of objectives and is committed to ticking off every single one of them; whether you're getting stuck in an infinite design phase, where you keep thinking about transformation but never actually get started; whether you're a like-for-like anchor, who has grandiose objectives for achieving significant changes to your process, but when push comes to shove actually just wants to tinker around the edges; or whether you are a tinkerer, who is constantly adjusting or changing the goals of the project and never settling on one consistent design or getting things over the line.
So that's an overview of four different mindsets that we commonly encounter, both internally and externally, when we're working on projects. Just to go through each in a little bit more detail and give some examples of what those look like in practice and how we can overcome them. So starting with the bucket-lister. That's this idea that perhaps you only get one chance to transform. Your team's been talking about it for years, this is your big moment, so let's do absolutely everything. And what that leads to is inevitable overscoping of the process: having hundreds of objectives, sometimes those objectives even being contradictory, because you work on the assumption you should be able to have everything that you want. And linked to that, because you've got this long list of objectives and you've put everything down, there can be an unwillingness to compromise on any of them. How do you avoid that, or how do we help clients avoid that? Well, firstly, when we get involved, we really focus on the available time and resourcing. And actually, I can see a question that's come in around how clients make progress with internally resourced transformations alongside BAU work. The focus on breaking things down into definable chunks — working out how much time you've got between now and the next big crunch period, and then doing something in that time period — really helps to achieve that. Linked to that, we also help people with workshops to prioritize objectives: literally go through everything on an RFP, or everything that the team wants to achieve, and work out what's absolutely essential and what perhaps can fit in around the absolutely essential parts. And then linked to that, of course, there's the idea of using external support, whether it's to help you do the end-to-end process or just to provide a sense check at the start that you're going in the right direction. We can provide objectivity around what's important and what the wider market's doing.
This next one, the infinite design phase, is the one that perhaps resonates most with me and that I find myself most easily trapped in. And part of the reason is that design work is really fun. You get to create and destroy entire reserving classes, methods and projections at the swipe of a mouse across the screen. It's intellectually stimulating and it's exciting, but it's very easy to get trapped into either aiming for perfection or aiming to eliminate all uncertainty from the process. Aiming for perfection we've touched on previously. Avoiding uncertainty is something that budget holders, executive sponsors and procurement people really like, but it's just not possible as part of a reserving process. And if you structure it to try and avoid all uncertainty, you won't take enough risk and you won't explore enough alternatives to find the optimal solution to a problem. So how do you break out of that infinite design phase? Well, in a way, the number one thing is just to get on and build something. Or, to articulate it more clearly: commit to a proof of concept that's relatively simple, either for everything or for one specific part, that you can deliver at a set point in time. And then, linked to that, reframe those uncertainty and perfection traps as an opportunity for continuous improvement. Accept that the first thing you build, because it's labeled a proof of concept, is not going to be the final thing, but you can reflect on it at the end and properly assess how fully it meets your needs and how much more sophistication and iteration is required to get it over the line. Ultimately, the mentality shift away from design to build comes from committing to building one thing and then getting on with it. And what we see is that very quickly, once firms commit to that step, they tend not to fall back into the design phase. Moving on to the third one, which is the like-for-like anchor.
This is something we commonly encounter in teams where the senior leadership team has set the objectives, but there perhaps isn't buy-in or acknowledgement of those objectives from the wider team, and in particular from those at the coalface doing the day-to-day work. So examples of this in practice are particularly junior members of the team, who have really detailed experience of the process, focusing much more on optimizing the current approach, perhaps because they don't see the bigger picture or don't understand how their specific area of expertise, focus or responsibility fits in with it. And a hesitancy to try out alternatives as part of a proof-of-concept process. Ways to avoid that? The most important and obvious one is to engage the whole team in objective setting and solution design. Whilst it is really important to keep a clear focus on high-level objectives, and we'll talk more about that in a second, at the same time it's important to make sure you've absorbed information from more junior team members about the detail behind why the process is the way it is now. Because ultimately, behind each incredibly unwieldy, complicated formula in an Excel spreadsheet, there is often a genuinely heartfelt reason for why it's done that way that needs to be explored and understood. And secondly, linked to that, focus on trials and tests, and provide psychological safety to revert changes if needed. It's much easier to get buy-in for people to help build something as part of a test, or something that can be undone or iterated later, than to force people to buy into the concept of moving away from what they're familiar with. And providing that clarity and distinction really helps get the team all pulling in the same direction and aligned on the next steps of a development sprint. Finally, moving on to the last of my four areas, which is the tinkerer. And this is the idea of 'can't we just also do this?' And I certainly recognize this in myself as well.
We get three-quarters of the way through building something and you have a really good idea. And you think, this spreadsheet can do X, Y and Z — why can't it just do X plus a little bit more, or Y plus a little bit more? Or actually, now I've seen it, there's perhaps a better way of laying this interface out. All those considerations are important, but there is a time and a place for them, and regular scope changes mid-build, and multiple iterations of something for very marginal additional improvements, can cause frustration. They can waste precious budget that might be better spent addressing bigger priorities, and they can lead to over-engineered, inflexible solutions that can't adapt to the real world of how reserving actually works, where often you're ingesting additional qualitative information and needing to reflect it at very short notice. Some simple ways to avoid becoming the tinkerer are really good project management discipline, having clear build phases separated by periods of reflection and review, and, linked to that, having clear acceptance testing and sign-off. It sounds really mundane, but it is really important to agree on what the definition of done is and what the definition of success is. Because that is the easiest way to be able to draw a line in the sand and say we have met the criteria, let's go and build something else, rather than falling into that perfection trap of continuing to re-engineer the same solution. Before we wrap up, I just want to spend a little bit of time focusing on objectives. Because if there is one golden thread that links all of those mindsets, the ways of avoiding those mindsets, and a lot of the stuff that Annabelle focused on around giving yourself the best chance of success, it's setting and remembering your objectives as you go through the transformation process.
And the way we help clients set and understand their objectives is through a four-quadrant process, where we sit down and do anonymous polling for each stage of the process — so data ingestion, actual versus expected, roll-forward, gross reserving, net-down, etcetera. For each stage, we ask whether they want to focus on robustness, efficiency, sophistication or value add. And we ask all members of the team, or certainly managers upwards to chief actuaries, to individually and anonymously score where they'd spend a 100-point budget across these different elements. And that provides a really useful conversation starter around some of the different trade-offs. Even without knowing who gave different answers, you can say, well, most people have scored sophistication very highly, but I can see one or two people have given a really high score to robustness — let's think about this from the perspective of what those people might be thinking. Do we actually need to focus on making spreadsheets more effective and less prone to error as part of the process? And we find this can be really helpful to avoid falling into a lot of those mental pitfalls that I've talked about, and that we can probably recognize in ourselves. It also gives better clarity when you're in the midst of build work and need to make on-the-fly design decisions. Having a clear articulation of your objective for that phase of the process can help you steer, in terms of 'I should do it this way because this is going to be the more efficient of the two, and that's our focus for this area', or 'here, what we're doing is value add, and I can see this option is going to add slightly more value than the others'. And that ultimately helps avoid lots of different iterations of the process. So bringing it all together: how can you take this talk back to your business and help make your next transformation project smoother?
As I said, the golden thread that links everything is clear objective setting, using that framework we've set out if it's helpful. Secondly, there's that key point around planning early and avoiding being tipped into a transformation focused solely on one objective rather than your wider priorities. Linked to that, consider your resourcing needs. Make sure that your scope and your resourcing are aligned, and that you've got contingency if the project overruns into busy periods. From my section, it's being wary of those mindset traps, where even if you set all the preconditions for success, it's very easy to get stuck in the design phase, enter the transformation process with too long a list of objectives to realistically deliver, or get stuck in endless tinkering with one phase at the expense of delivering later stages of the process. An important carve-out from that is how to balance bottom-up and top-down priorities, such that at the end of the design phase you've got good buy-in from the whole team. And lastly, before we hand back over to Stuart, to leave you with one thing — perhaps the most important thing of all — calibrate your expectations. Don't expect perfection on the first attempt, and don't expect to have certainty around what you'll deliver and when, because transformation is ultimately a journey and a process of continuous refinement rather than a clearly defined pathway that goes from A to B. Back to Stuart for questions. Thanks very much, Ed. I'm not sure which of you will answer these questions, but I'll pose them anyway. So the first one is of the 'how long is a piece of string' variety: how long should a reserving transformation take, and how many dedicated people does it need? I can take this question, Ed, if you want. So I guess that is a 'how long is a piece of string' question. It very much depends on resourcing. So for example, if you have dedicated resource, internal and also external, you could do a sprint if you need to.
It also depends, for example, on how frequent your reporting deadlines are. And if you've got lots of BAU tasks and very little resource, then you may need to stretch the transformation project, for example over 18 months, and do mini phases to make it more manageable. But equally you may see a much shorter timescale. For example, if you're migrating reserving platforms, you might be able to do that in a couple of months if you have the right resource. So it's very much a 'how long is a piece of string', but it depends on what your objectives are and what your resourcing is. Thanks, Annabelle. The next one is: how are different functions, for example actuarial, data and finance, involved at different points of the reserving transformation journey? I can take this one as well. So you do need to align all functions when you're doing a reserving transformation project, because the more functions that are aware of it and involved, the smoother the process and the easier the messaging becomes. But some functions will be more involved in some phases than others. So for example, in the initial phase — that kind of shopping-list phase where you're coming up with your objectives — it might be more focused on actuarial and finance. And then when you come to data ingestion, there might be a heavy focus on data quality and structures, so involving IT and data at that point would be high priority. And then in the implementation stage, you might see a swing back to, for example, actuarial and finance while you're actually producing the design decisions and build. And then obviously you tie in finance when you get to your downstream reporting, plus risk and other functions. So in summary, you need to involve every function, but more or less at different phases depending on where you're at with your project. OK. And one final one before we move on to the last topic.
What training or upskilling will be needed for new tools or methodologies? I can take that one. I think it really depends on what you implement as part of your end-to-end reserving process. And we're seeing so much more interest across the market in deploying AI alongside the traditional reserving methods as a kind of copilot, whether through InsurSight or other custom tools and platforms. I think what this leans into is a new way of approaching actuarial work. Instead of traditionally working through bottom-up, one method after another, each being reviewed from the starting point by a human reviewer, we train ourselves to work from the top down, perhaps using AI to generate DFMs, BFs, IEs, Cape Cods, etcetera for us. The analyst then almost starts where the peer reviewer would, using diagnostics and quantitative metrics for goodness of fit to assess which method is the best fit to the data, and then using judgment to build out from there into what's not in the data but is still important to the business. So the real answer is that everyone's on a journey of moving from that bottom-up process, where you just consider one DFM, to the top-down, where you consider more methods than the human mind can comprehend and pick the best. And the real training that's needed is to change the mindset: to move out of that traditional workflow approach and to embrace a more diagnostic-driven, top-down approach. It takes training to understand what the diagnostics are, what they mean, how to interpret them quickly, and how to quickly narrow down an almost mind-numbing number of options to just the few candidate methods that give you the best starting point, and then to take an AI-suggested method and overlay judgment on top of it. Thanks very much to Ed and Annabelle. And now we're going to move on to our final topic, where Charlie and Phoebe will show us how to use analytics to solve reserving problems.
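To make the diagnostic-driven triage Ed describes concrete, here is a minimal Python sketch: score each candidate method's expected development against recent actual experience, and surface only the best-fitting few for the analyst to start peer review from. The method names, the figures and the choice of RMSE as the goodness-of-fit metric are all invented for illustration — this is not how any particular platform scores methods.

```python
# Toy sketch of diagnostic-driven method triage: score each candidate
# method's expected development against recent actuals and surface only
# the best-fitting few for human review. Method names, figures and the
# RMSE fit metric are all invented for illustration.
import numpy as np

actual = np.array([100.0, 110.0, 118.0, 124.0])  # recent observed development

candidates = {  # each candidate method's expected development, same periods
    "chain ladder":         np.array([98.0, 109.0, 119.0, 125.0]),
    "Bornhuetter-Ferguson": np.array([105.0, 112.0, 116.0, 120.0]),
    "Cape Cod":             np.array([90.0, 104.0, 115.0, 126.0]),
    "initial expected":     np.array([110.0, 110.0, 110.0, 110.0]),
}

def rmse(expected):
    """Root-mean-square error of expected versus actual development."""
    return float(np.sqrt(np.mean((actual - expected) ** 2)))

# Rank methods by fit; the analyst starts peer review from the top few
ranked = sorted(candidates, key=lambda name: rmse(candidates[name]))
shortlist = ranked[:2]
print(shortlist)
```

In practice the scoring would use richer diagnostics than a single error metric, but the shape of the workflow — many candidates in, a short judgment-ready list out — is the point.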
Over to you, Charlie. Hello, everyone. So we're going to look at how analytics can help with these three questions. I'll be talking through the first two and then Phoebe will take us through the third one. And why these three? Well, it's because people have told us these are the ones that they are interested in. And what we're about to show is all work we do within LCP and have been making available in our reserving and analytics platform, LCP InsurSight, too. So taking that first question: can we predict reserve deteriorations? Well, this is something we talked about at last year's seminar. Why are we talking about it again? We had lots of interest, we had follow-up conversations with institutions in the UK, US and Europe, and there are three key pieces of follow-on analysis we wanted to share. So, a quick recap from where we left off last year. We showed how various reserve deterioration risk indicators could be used to predict future reserve deteriorations. Those included things like trends in incurred-to-ultimate, changes in incurred development, and what recent deteriorations have been. We then trained machine learning classification models to take those indicators and predict whether there would be a reserve deterioration. And we assessed how well this worked on NAIC data, which covers the whole of the US insurance market by class and company. The results from last year were looking at predicting a reserve increase of more than 5%, over the time period from 2019 to 2022. And you can see the model performance there: predicting correctly quite a high percentage of deteriorations and also non-deteriorations, and the area under the ROC curve there as well, which is a commonly used measure of machine learning classification performance. And 80% is pretty good — 100% is as good as you can get, and 50% is what you'd get just flipping a coin as to whether there's a deterioration or not. So, on to the new stuff.
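As a hedged sketch of the approach described above — training a classifier on deterioration risk indicators and measuring area under the ROC curve — the following Python uses synthetic data and scikit-learn. The three indicator columns and the gradient-boosting model choice are assumptions for illustration, not the actual feature set or model used in the study.

```python
# Sketch of the deterioration-prediction approach on synthetic data.
# Indicator columns and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical deterioration-risk indicators, one row per reserving
# class/company observation
X = np.column_stack([
    rng.normal(size=n),  # trend in incurred-to-ultimate
    rng.normal(size=n),  # recent incurred development
    rng.normal(size=n),  # size of recent deteriorations
])
# Synthetic target: deterioration of more than 5%, driven mainly by the
# first two indicators plus noise
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Area under the ROC curve: 0.5 = coin flip, 1.0 = perfect classifier
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

On real NAIC-style data the feature engineering is the hard part; the model-fitting and AUC evaluation step is as mechanical as this sketch suggests.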
So the first question we've answered is: does performance change if you try to predict deteriorations that are bigger than 5%? So let's take a look at that. There you go — the pink column headers. This is doing the same thing as last year, but we're looking at predicting deteriorations of 10% or more, and 15% or more. And the performance is pretty similar to what we got when we were looking at predicting the smaller deteriorations. The second piece of follow-on analysis looks at whether model performance changes if we look at a different time period. So we looked at 2019 to 2022, and we've gone away and purchased some more data, the 2001 and 2011 year-end NAIC data. And we can see, using the 5% cut-off, how the performance changed there. It's dropped off a little bit, but not dramatically — the AUC is a little bit lower, and we don't fully understand the reasons for the difference at the moment. That's something we're actively investigating. But performance is broadly comparable even looking at those older time periods. The third question we're asked is: how do you ensure that the reasons for the prediction for a particular reserving class are well understood? So if this model is flagging up that a particular reserving class at this quarter end actually has quite a high chance of having a deterioration in the future, and it's above a deterioration risk tolerance that you've set, how do you understand why it's telling you that and whether it's something you care about or not? So we've introduced SHAP scores to understand the prediction for a specific reserving class. This is a general technique for understanding individual predictions from a machine learning model. It's been widely used in the machine learning community for quite a long time, and is referenced within actuarial standards and guidance now too.
And what this allows you to do is effectively click on a reserving class that's being flagged up as having a high chance of deterioration, and it will show you the reserve deterioration risk indicators that are driving that prediction. So we can see in this example here that an increasing trend in paid-to-ultimate is the key driver for why there's a deterioration predicted in this class. And the second most important one is the paid-to-incurred. It will also show you which indicators are actually suggesting something slightly different, so you can get a slightly more nuanced view of what's going on as well. In a similar way, if you ask an actuary to review your reserves, they might say, well, I'm a little bit concerned about these things, but actually if I look at these other diagnostics, then perhaps there's less reason for concern. So it gives you a more balanced view, but tells you what the key things are to look at. So overall, this means you're not overwhelmed by information. We can use the predictions to pick out the reserving classes to focus on that are at risk, and now we can use SHAP scores to understand the specific diagnostics for each reserving class that we need to as well. You might have a really wide range of diagnostics that you're feeding in to predict potential deterioration, but you want to make sure that, out of those, say, 20 to 30 diagnostics, you go straight to the top three, say, that are indicating there's a potential issue. So moving on to the second question: how do we open up reserving insights to the wider business? Now, there are lots of valuable insights within reserving teams already, but they don't always make it out to everyone in the wider business.
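To show what SHAP scores are doing under the hood, here is a from-scratch Python illustration of exact Shapley attribution for a single prediction, with absent features replaced by background (average) values. In practice you would run the `shap` library against the trained classifier; the toy linear "deterioration score" model and the indicator values here are invented purely to demonstrate the mechanics.

```python
# From-scratch illustration of the Shapley attribution behind SHAP
# scores, exact over all feature subsets (fine for a handful of
# features). The linear "deterioration score" model, background values
# and indicator values are invented for illustration.
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley values for one prediction; features absent from a
    coalition are replaced by their background (average) values."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = background.copy()
                without_i = background.copy()
                for j in subset:
                    with_i[j] = x[j]
                    without_i[j] = x[j]
                with_i[i] = x[i]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Toy "deterioration score" over three risk indicators
predict = lambda v: 0.8 * v[0] + 0.5 * v[1] + 0.1 * v[2]
background = np.zeros(3)        # average indicator levels
x = np.array([1.0, 0.5, -0.2])  # one flagged reserving class

phi = shapley_values(predict, x, background)
# For a linear model each Shapley value is coefficient * deviation
# from background: approximately [0.8, 0.25, -0.02]
print(phi)
```

Ranking `phi` by absolute size is exactly the "top three drivers" view described above: the first indicator dominates, the second supports it, and the third pushes slightly the other way.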
And the second thing that I'm particularly interested in is that I think there are also lots of other insights waiting to be unearthed that the wider business would benefit from, and which the reserving viewpoint and toolkit can help detect. However, they're not discovered currently, because reserving isn't looking at that level of granularity, or it's not material from a total reserves perspective but is very material from a business performance perspective — for example, looking at more detailed trends in motor damage claims. So how do we do this? Well, it's a simple answer really. We just let the business ask the questions and show them the insights which answer their questions. And we have the technology now to do this without it becoming an impractical workload for the reserving team. So let's go through an example: where is the deteriorating experience on lines exposed to bodily injury claims? How do we convert this into reserving language so we can use the reserving toolkit to answer it? Well, it's a language problem, right? So we use a large language model. Take the first part of the question, where is the deteriorating experience? We can map that into more specific questions that reserving teams think about. Like when I use LLMs to help me with my work, the response actually helps me clarify what it is that I actually want an answer to. So you can see in this example that 'deteriorating experience' is a slightly vague thing to ask someone to answer. And here it's come back and said, well, what do you mean? Do you mean profitability decreasing, claims frequency increasing, claims being settled more slowly? And then you can choose which thing you actually specifically want an answer to. Each question then maps across to specific diagnostics and trends that have been identified. And then you can see clear charts with those trends highlighted in response to the question that you've asked.
So we've got part of the way there, but we haven't covered the second part of the question: lines exposed to bodily injury claims. This one, in a way, is more straightforward. Companies will often have a data dictionary or data structure which you can look at and then work out what you need to filter on to see what you're interested in. But people don't want to have to consult a data dictionary every time they've got a question and work out what fields they need to filter on, what SQL query they need to write or what filters in Power BI they need to click on; they just want to get the answer. And again, large language models can help us. They can work out what data level to filter on and what segments, without the user needing to go in and understand the data structure. So if we take this example here, we've got two data levels. The first one is class of business, and LLMs can do a pretty good job of mapping something like bodily injury claims to particular classes of business. You can see here highlighted liability occurrence, liability claims made and med mal claims made. The bit that doesn't involve AI at all is then having that understanding of the data structure: the metadata around the data structure can then be used to pick out the particular distribution channels that sell those classes of business. So you can then pick out, OK, we sell those through MGA ABC, open market in France and wholesale. Combining those two things together, we can take the lines exposed to bodily injury claims, where we've got the trends coming through in those diagnostics in response to, say, where is profitability decreasing, and pick out the specific segments of our portfolio in response to the question that's been asked by the business.
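As a very simplified, non-LLM stand-in for the two-step mapping just described, the sketch below uses keyword matching where an LLM would sit, followed by the pure-metadata channel lookup (the step with no AI at all). Every class, tag and channel name here is invented for illustration.

```python
# Step 1 (LLM in practice, keyword matching here): map a free-text business term
# to classes of business. Step 2 (metadata only, no AI): look up the distribution
# channels that sell those classes. All names below are hypothetical.
data_dictionary = {
    "class_of_business": {
        "liability_occurrence": ["bodily injury", "liability"],
        "liability_claims_made": ["bodily injury", "liability"],
        "med_mal_claims_made": ["bodily injury", "medical malpractice"],
        "property_direct": ["property", "fire"],
    },
    # metadata: which distribution channels sell each class of business
    "channels_by_class": {
        "liability_occurrence": ["MGA ABC", "open market France", "wholesale"],
        "liability_claims_made": ["open market France", "wholesale"],
        "med_mal_claims_made": ["wholesale"],
        "property_direct": ["direct"],
    },
}

def segments_for_query(term: str) -> dict:
    """Map a business question term to classes of business and their channels."""
    term = term.lower()
    classes = [c for c, tags in data_dictionary["class_of_business"].items()
               if any(term in t or t in term for t in tags)]
    channels = sorted({ch for c in classes
                       for ch in data_dictionary["channels_by_class"][c]})
    return {"classes": classes, "channels": channels}

result = segments_for_query("bodily injury claims")
print(result["classes"])
print(result["channels"])
```

The point of the sketch is the division of labour: the language model only handles the fuzzy term-to-class mapping, while the channel lookup is deterministic metadata traversal.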
And this is now something that can be done in seconds and self-served by people asking these questions themselves, rather than waiting days to get a response from, say, the reserving team or an analytics team, and it massively speeds up that feedback loop for the business getting answers to these questions. So this all sounds a bit hypothetical when going through an example like this, but it isn't just hypothetical: it's something we've built into InsurSight, and here's what it looks like. It's called Ask Identify, and we may change the name depending on what people think, so we'd be keen to hear people's views on that. And you can see here, you can just type in a question. We've got the same example we went through just now; it comes back and asks what you're particularly interested in, and shows you the diagnostics and trends that it's filtering on, so you understand what it's actually showing you and how it's interpreted your question. It's not just a black box. And you can see here, in the orange within that pink box, it's also highlighted that burning cost metrics could be helpful to answer this question, but they're not currently available in this data set, and you'd need exposure data to do that, which isn't currently in the data set. Getting that understanding of which diagnostics come up a lot and could help answer the questions the business is asking is really useful, because you then have a record of which data items would add most value. So you can go from a general sense that we really need to improve our data to understanding the specific data improvements that are most important, and then really focus on chipping away at those, with focused data quality initiatives on the things that are actually going to make a big difference.
So you can continue to improve over time, rather than relying on a big bang data quality exercise that might never actually happen. In the orange box at the top, we can see the match to the filters on the lines of business that are exposed to bodily injury claims, so that's covering the second part of the question. It's got full knowledge of the data structure, so it can do that, and it's only using metadata to do it: none of the actual triangle data is needed to make this work. So we're really excited about this coming into InsurSight. And I think this is just the start of what we can do now by integrating large language models with both actuarial insurance knowledge and traditional machine learning models, like we've got here with the automated trend identification. So we can really tackle insurance-specific problems and do things that we've wanted to do for a while, but can actually do now. I'll now hand over to Phoebe to talk about optimal reserving segmentation. Thanks, Charlie. So the third question that we're looking at is how do we find an optimal reserving segmentation to improve our reserving? I will start by asking you to reflect on where you think your reserving segmentation is on the scale from too few reserving classes to too many. Maybe you think you've got it spot on. If so, how do you know that? With too few reserving classes, there is a risk of under-reserving if there's unmonitored mix change between segments with different claims characteristics within the same reserving class. Looking at premium mix change could have flagged this much sooner, at the end of the underwriting year; instead it's not identified until two years later, when claims experience comes in and the reserving team is investigating adverse actual versus expected (A vs E) experience. Furthermore, allocation to a business reporting level can be inaccurate if claims characteristics differ within the class.
To address these downsides, we can have more reserving classes. It is not uncommon for reserving teams to have over 300 different classes. However, too many reserving classes brings its own problems: it's a time-consuming process to reserve these classes, and it can be harder to spot big-picture trends because you're lost in the detail. We have taken a data-driven approach here to give an objective view on the balance between granularity and simplicity for your portfolio. The three key steps we have taken are to select data diagnostics which capture the key claims characteristics that you are interested in for your portfolio; to use a clustering algorithm to identify homogeneous groups which have similar characteristics; and finally to compare this to your current reserving segmentation to see where it is possible to prune things back or where to divide reserving classes up further. The first step is to choose data diagnostics. We have focused on purely data-based diagnostics here so that no actuarial judgment is needed. This means we do not need to do reserving at a granular level to get diagnostics that allow us to compare between segments. We have also focused primarily on triangle diagnostics. This is because they are not distorted by different premium volumes across different accident or underwriting periods for different business segments. We can capture where business segments have similar trends and experience, and there is the flexibility to look at these diagnostics by reporting year or underwriting year depending on the metric. The three key areas we can assess using these triangle diagnostics are profitability, claims handling and development. For example, we can look at incurred as a percentage of development year 3 incurred, as shown on the chart on the slide. We can also consider non-triangle diagnostics such as rate change.
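The clustering step in the approach above can be sketched as follows. This is a hand-rolled single-linkage agglomerative clustering purely for illustration; in practice a library implementation (e.g. hierarchical clustering in scipy) would be used, and the diagnostic values below are invented rather than real triangle data.

```python
import numpy as np

# Minimal sketch of the clustering step: agglomerative (single-linkage) clustering
# of business segments by their diagnostic vectors, stopping at a chosen number of
# groups. In practice the columns would be triangle diagnostics such as incurred
# loss ratio or paid-to-incurred; the figures here are illustrative only.
def cluster_segments(X: np.ndarray, n_clusters: int) -> list:
    clusters = [[i] for i in range(len(X))]          # start: each segment alone
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: minimum pairwise distance between clusters
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))          # merge the two closest clusters
    return clusters

# Six segments with two diagnostics each (e.g. loss ratio, paid-to-incurred):
X = np.array([[0.60, 0.70], [0.62, 0.72],            # profitable, fast-paying
              [0.95, 0.40], [0.97, 0.42],            # poor, slow-paying
              [0.75, 0.55], [0.77, 0.57]])           # in between
groups = cluster_segments(X, n_clusters=3)
print(sorted(sorted(g) for g in groups))
```

A nice property of the hierarchical approach, as noted above, is that `n_clusters` can be varied and the resulting groupings reviewed, rather than the algorithm fixing one answer.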
In summary, there is flexibility to consider diagnostics based on the data you have and the characteristics you care about for your portfolio. We will now look at some examples of our clustering approach. All of the examples are from real but anonymised data sets. In our first example, we have 30 different business segments and have considered four diagnostics. Each blue line in the chart represents a different business segment. We have taken the leading diagonal values from our triangles and plotted these by cohort. The first question is whether we can keep things simple and just pick out groups by eye. This is possible for a few obvious groupings, but it's hard, subjective and time-consuming to do across multiple different diagnostics. Instead, clustering algorithms can do this grouping for us in seconds. We have used hierarchical clustering here, and one of the nice things about this approach is you can easily review and adjust the number of clusters. In the following slides we have used colours to show the different clusters. Apologies, this might be a bit challenging to see if you're colour blind, but when we bring this to InsurSight we will have interactivity on the site to make this easier to review. The first example here is where we have divided our data into three homogeneous groups. We can see how easy it is to review what the clustering algorithm has done from the charts. This isn't a black box we have to trust. We have two business segments in green which have clearly different incurred loss ratios. We also have sensible groupings for the light blue and pink segments; however, further subdivision is needed. This is the same data but divided into six groups. We have made the lines thicker for clusters with a smaller number of segments so they are easier to see. On the slide we can see that we have picked out further outlier segments in orange and grey.
We can also divide this further into nine homogeneous groups, which looks like a good level of division for this example. The next example is for a different class of business, considering the same diagnostics and again considering 30 business segments. We have divided this into five groups, and we can see that this works well. The approach works well even with outliers: there is an orange segment on the bottom right chart with a much higher average incurred than the other segments, and this has been successfully separated into its own cluster. Our final example is from a noisy data set; this is the sort of thing you might see in a Lloyd's syndicate. While this takes more time to review, we can see that it has sensibly split our business segments into seven groups. I will now discuss how to apply this output. The first situation might be a case of too many reserving classes, where we want to simplify your segmentation. In this example, we are looking at a part of the portfolio with nine business segments. The existing reserving segmentation is quite granular, with seven reserving classes for these nine segments. The clustering approach that I've just discussed has identified that those seven reserving classes can be simplified into three homogeneous groups. We can therefore confidently allocate to a business segment level from these groups, because we have verified homogeneity within them. We have therefore achieved our goal of a simple reserving exercise with accurate granular results for business needs. Our second situation is a case of too few reserving classes. Here we have two approaches: we can either split the reserving class into homogeneous groups from our clustering, or we can choose to keep our reserving segmentation as it is but monitor for material mix changes which we might want to address. Here we have a chart showing 15 different business segments and their mix changes over time.
We can see that the chart is noisy and unclear, and it is difficult to see any mix changes. Even where we can see there has been a material mix change, we don't know if this matters: it might be offset by decreases in mix in other segments with similar characteristics. Instead, on the right is the chart monitoring mix change with clustering. We have three homogeneous groups and can see whether there have been mix changes. We also know that if there has been a mix change, it matters. I will now hand back to Charlie to discuss key takeaways. Excellent, thank you, Phoebe. So we've looked at how reserving analytics can help solve three key reserving problems, and just a quick recap of some of the things we've covered as part of that. We've seen how machine learning models can predict large reserve deteriorations, and how, using SHAP scores, they can also prioritise the specific diagnostics to review as well as the specific reserving classes to look at. We then looked at how technology like large language models can allow the business to just ask questions, so we move from reporting dashboards to something that's a lot more interactive and can give people the answers they want, rather than just giving them the ability to look at a lot of different things without knowing how to find them. And then the final topic, looking at clustering: we've seen how that gives an objective view on the optimal reserving segmentation, allows you to potentially simplify the segmentation, allocate to a reporting level with confidence, and also identify early where subdivision is needed due to premium mix change. So you're avoiding that issue, which can be very annoying, where you're investigating adverse A versus E and it's actually due to mix change which you could have spotted, if you'd looked at the premium mix change in the right way, say two or three years ago. What Phoebe's talked about really helps avoid that situation.
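The cluster-level mix monitoring just described can be sketched in a few lines: premium mix by individual segment is noisy, but aggregating to the homogeneous groups from the clustering makes a material shift visible, and any shift between groups matters by construction. The premium figures and group labels below are illustrative only.

```python
import numpy as np

# Sketch of mix-change monitoring at cluster level. Rows are cohorts (e.g.
# underwriting years), columns are six business segments; figures are invented.
premium = np.array([[100., 95., 105., 50., 55., 45.],
                    [ 90., 85., 95., 80., 85., 75.],
                    [ 70., 65., 75., 110., 115., 105.]])
clusters = {"group_A": [0, 1, 2], "group_B": [3, 4, 5]}   # from the clustering step

# Share of each cohort's total premium written in each homogeneous group
totals = premium.sum(axis=1, keepdims=True)
mix_by_group = {g: (premium[:, idx].sum(axis=1, keepdims=True) / totals).ravel()
                for g, idx in clusters.items()}

for g, mix in mix_by_group.items():
    print(g, np.round(mix, 3))
```

Here the mix shifts materially from group A towards group B over the three cohorts, which is exactly the kind of movement that would prompt an early look before adverse A vs E emerges.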
So thank you very much and I'll hand back to Stuart. Thanks so much, Charlie. We've got a few questions. I'll pick the first one up first, if I may, and it's quite a generic question: "My question is a general one in response to the session overall. What is the most important question you think NEDs should be asking, and how can they best challenge the reserving team?" So I'm going to start as all good politicians do: I'm going to sidestep that question and plug the fact that we run round tables for NEDs regularly, and we're going to make that question number one on the agenda next time we have a NED round table. So if any of you have NEDs who would like to come to those reserving round tables, then please get in touch with us and we'll add you to the mailing list. Those tend to run virtually; most of our round tables have an in-person version and a virtual one, but because NEDs tend not to be in the city that often, we run those just virtually. So that's a plug for that, and you'll have to wait a bit longer for the answer to that question. So back to Charlie and Phoebe's talk. Charlie, the first question is: the three analytics areas you've talked about, do they apply equally well to commercial and personal lines, or does each of them lend itself to one more than the other? Yes, I think they apply equally well to both. And I think that's because the common problem that we're trying to address here is the potential conflict between companies wanting to make better use of the data they have available and the amount of time it takes to do that, alongside all of the normal reserving work that you already have to do. And all three of the things we've talked about are areas where analytics can help with that: making better use of the data, looking at things at more granular levels, spotting where there are issues and, as importantly, where there aren't.
Using the deterioration prediction allows you to use your time a lot more effectively, and also to analyse granular data much more efficiently than you could without analytics techniques like the ones we've talked about. I think the way that people are looking at making better use of data does vary between personal and commercial lines. A motor insurer, for example, might be particularly interested in understanding the particular claim size bands they should use for bodily injury claims, using the clustering approach that Phoebe talked through; or in looking at trends in experience across different injury types using the Ask Identify large language model capability I talked about, the sort of question claims teams would be really interested in, like what's happening on whiplash in the south west of England, for example. For commercial insurers I think it's slightly different: the problem there often is that you've just got so many different lines of business, and the clustering work on segmentation that Phoebe talked about can really help you understand where you can actually simplify things compared to the, say, 200 to 300 reserving classes you have at the moment. And the deterioration prediction becomes really important then as well, because as chief actuary or head of reserving, if you do have a very large number of reserving classes even after doing the clustering, how do you know that you're picking up on the things that are important, where there are warning signs of deterioration, before you're getting asked questions by, say, regulators on why a trend is coming through and it looks like you've spotted it too late? You want confidence that you've already spotted anything that might be an issue. OK. Next one: how do you deal with segments that are too volatile to assess whether they are similar to other segments?
I can take this one. This talk was part of some wider work we've been doing on optimising segmentations, and as part of this analysis we've calculated a volatility score for each business segment. We then grouped segments which were too volatile, and then performed clustering based on these groups, which were sufficiently stable, as I described above. OK, thanks Phoebe. And then just one last question: how do you deal with diagnostics on very different scales when applying the clustering algorithm? So loss ratios are typically up to around 100%, while average claim sizes can be many thousands. Yes, so our clustering approach works by looking at the differences between diagnostics for each business segment, so we do need to make sure that we're on a comparable scale, because otherwise differences of thousands in our average incurred would dominate the clustering compared to, say, an incurred loss ratio. To do this, we looked at the percentage difference of our average incurred from the mean for each cohort, and this allows us to get a comparable scale to our other metrics, such as our incurred loss ratio. OK, thanks very much. I think in the interests of time, I'll call it there. So it's time to draw the webinar to a close. I hope you've enjoyed this session; I certainly found the talks very interesting and thought-provoking. Shortly after the webinar ends, the screen will prompt you to complete a feedback survey, and we'd love you to do that, as it helps us formulate the agenda for the next sessions. We've got a good track record of responding to requests for topics, so if there's anything you want us to cover, please provide that in the feedback. And then finally, a couple of points: remember, we have our own conference event on the 13th of May, and you've got a link to book on to that.
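The scaling step just described can be sketched in a couple of lines: average incurred (in thousands) is put on a comparable scale to loss ratios by taking its percentage difference from each cohort's mean, so that neither metric dominates a distance-based clustering. The figures are illustrative only.

```python
import numpy as np

# Rows are cohorts, columns are business segments; average incurred in currency
# units (invented figures). Raw differences here are in the thousands and would
# swamp loss-ratio differences of a few percentage points.
avg_incurred = np.array([[12000., 15000., 9000.],
                         [13000., 16000., 8500.]])

cohort_mean = avg_incurred.mean(axis=1, keepdims=True)
scaled = avg_incurred / cohort_mean - 1.0     # % difference from each cohort's mean

print(np.round(scaled, 3))
```

After this transform the values are dimensionless deviations (e.g. +25%, -25%) and sit on the same footing as loss ratios when feeding the distance calculation.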
And I'm pleased to report people have already registered for that event, which is great. And then, just following up on the point towards the end there, we do run round tables for NEDs, so if any of your NEDs are interested in joining, please get in touch with us and we'll add you to the mailing list. So we'll finish a couple of minutes early, and with that, I'll say thank you and goodbye.

Please register to watch our webinar on-demand. In this webinar we discuss:

  • Reserving transformation: achieving new heights whilst avoiding common pitfalls
  • Claims: a case study of using analytics to unlock real value from your claims data
  • How analytics is helping to predict reserving deteriorations, determine optimal reserving segmentations and open up reserving insights to the wider business.