Hello everyone. I'm Hope and I'll be helping moderate today's webinar. I would like to welcome everyone to our Radionuclidic Impurity Analysis with Apex Guard Software Version 1.2 webinar. Before we get started today, I'd like to go over some housekeeping. At the top left of your screen, you will see our Q&A chat box. Please ask questions throughout the presentation and we will answer as many as possible within our time constraints. Below that is a resource list with more documentation on what we will discuss today. If you don't want to miss any future webinars, sign up for our mailing list at the bottom of your screen. All of your windows are adjustable, so feel free to move them around for your best viewing. And if we cover something you really like, go ahead and give us a thumbs up at the bottom of your screen. If at some point you're having trouble viewing this webinar, please refresh your browser. This webinar is being recorded and will be available at mirion.com/webinars for you to view again or share with any of your colleagues who missed it; you can also see all of our past webinars there. This link is in our resource section. Today's webinar will be presented by Kara Phillips and Henrik Perchon. Kara is product line manager of Global Services and Henrik is a principal research scientist. They have both been with Mirion since 2008. I'd like to thank you for joining us, and I will now turn the presentation over to Kara.

Hello everyone, happy to have you all joining us today. Over the next hour or so, we'd like to cover a number of topics. First, I'd like to give an overview of what Apex Guard is. Apex Guard was released in 2021, and we're happy to have a pretty large update to present today. That update is really focused on radionuclidic impurity.
So we'll give a brief overview of what that means to us and what that means to you, and then I would love to have Henrik go into a deeper dive on how we chose to solve that particular challenge in Apex Guard version 1.2. We'll wrap up the hour with an overview of some other features. Throughout the session, please make sure to check out the resources in the panel; we'll give a brief overview of some topics, but there's more information there for a deeper dive. So let's get started at a very high level: what are Apex Gamma and Apex Guard? Well, Apex Gamma and Apex Guard are features that use the Genie spectroscopy suite, and at a much more fundamental level, their role here is to acquire spectra, primarily from high-purity germanium (HPGe) detectors. They use a Lynx MCA that is controlled by our Genie software. I'll go into slightly more detail on what the Genie software is on the next slide. Apex Gamma, then, is an application that sits on top of our Genie software, and it has a lot of different functionality which we'll review, including streamlined sample definition and sample results management. Apex Guard is a feature that is enabled within the Apex Gamma application to help facilitate specific functionality required by radiopharma customers. That includes things like increased data integrity for user management, as well as stronger change control functionality that you'll see reflected in the audit log. The Genie software itself is the fundamental underlying acquisition and analysis functionality in our system. It is managed through a virtual data manager and can collect spectra not only from germanium detectors but also from sodium iodide detectors and other scintillators. Multiple different instruments can be connected and operated by the one Genie application. I like to call Genie the toolbox of functionality because it supports a large range of applications and functionalities.
And when we couple that with an application like Apex Gamma or Apex Guard, we can expose certain functionality to give you a more streamlined and functional workflow. What does Apex Gamma do? Well, it does a lot, and I used the word workflow a couple of times, so let me double down on that as we work through the different functionalities at the top of the screen. This is one of the screens you'll see in Apex Gamma. A user may first create different sample procedures, where the sample procedure controls the counting parameters, the required analysis, and the reporting and approval requirements of that report. Once those procedures are defined, specific samples are associated with each given procedure, and these samples may be imported from a laboratory information management system or manually defined as needed. These samples then get queued up in the left panel of my counting interface. That means that once we come to the sample counting interface, all the operator has to do is take a sample that's been defined and drag and drop it onto one of the available detectors, which will turn green as it starts. The predetermined count time finishes, the sample runs through the associated analysis, and the report is generated. Now if we switch hats and operate as a data reviewer, we come to the last step of this workflow. The data review section is reached through this menu option, where a reviewer can come in, review the spectrum that has been acquired, toggle to the report, and then finally add their approval. This provides a streamlined and efficient way to manage a larger influx of samples, all while using the Genie analysis engines under the hood. One of the most important functionalities of the Apex Gamma and Apex Guard system is the ability to ensure that the system is operating in a compliant way.
One of those ways is to make sure that all the system quality control checks have been executed and are up to date, and that's what this screen reflects. Here I have different calibration check, background check, and system background check definitions, and each of these configures what the response is if the system has failed that particular check or if the check has not been performed at the procedurally defined frequency. For instance, you'll want to perform a calibration check about once a day. If you do not perform it, the system can be locked out, a warning can be issued, or the event can simply be logged to a file. Each of these settings can be configured to give you the best control over your system and to ensure that you are following your procedural compliance requirements. A second very strong functionality of the Apex application is the granular permission groups. Here I'm in our setup functionality and I'm looking at the different user groups that have been defined. In this case I have administrators, managers, and technicians defined, and each of the specific users is assigned to one of those groups. On the right-hand side I have a number of very specific functionalities that I can enable or disable for a particular permission group. This means I can have one permission group that is able to count samples but not review samples, another that is able to review but not count, and I can also restrict any setup functionality to administrators only, ensuring that only users who have the proper training and permission level can make modifications to my procedures or other counting operations. With Apex Guard specifically, user management is controlled by Windows credentials, so all the password reset and complexity rules can be managed directly by your IT department.
It also has an automatic log-off setting, so that if a user walks away from their computer, the system will log off automatically in case another user comes along and forgets to change to their own log-on. Let me keep going to the back end of the workflow process: the data review, reanalysis, and approval. On this screen we've toggled the spectrum review to show the report view. We will go into the report in a lot more detail in some of the upcoming discussions. The Apex Gamma application works on a SQL Server database, and what that means is that I have a number of search filters I can quickly use to zero in and find exactly the sample I am most interested in, whether it's the one that's pending review, the one that has been counted on a certain detector, or the one associated with a particular sample ID number. One of the features implemented in the original version of Apex Guard is this new button called Analysis. What this does is that any time a sample is reanalyzed, which is functionality available on the data review screen, it increments the analysis number here, so that I can always go back to my original sample analysis, view those results, and re-establish them if we want to. This gives me full traceability to the original measurement and original analysis, while having the ability to refine the analysis if we find that an adjustment is needed. Finally, if a sample is approved, a digital signature is applied to the document. This digital signature has the approver's name and the time and date associated with it, and optionally a comment can be added at that time as well. With Apex Guard specifically, the Change Authorization dialog and audit log are, I think, among the most differentiating features.
What the Change Authorization dialog does, as you can see right here, is that any time a system change is made, in this case changing a procedure, it generates this dialog. The dialog shows what parameter is being changed, the description of that parameter, what the previous value was, and what the new value is being changed to. In this case, the NID library is being changed from the demo library to the standard library, the time preset is being changed from 300 seconds to 150 seconds, and the tentative NID library is also being changed. If configured, a user must enter a comment or justification, as well as re-verify their user password, to ensure that they are the person they say they are when making this change. All these changes then also get reflected in the audit log, a snapshot of which is shown here. I have a particular event ID so I can see exactly which event this was, the time and date, the user who made the change, the details about the change, and again the values before and after, to give that full traceability. The entire event log is available in one report. It does have a filter field, so you can create a sub-report based on the information you want, but it is all consolidated in one location. We have also developed what we call training tracks, a series or progression of training classes to help build up the expertise of users. What I want to highlight here is that we have a couple of training classes that we've developed since launch specifically around this product. One is the SU-675 Apex Guard operational training, which goes over the full Apex Gamma product with a focus on the Apex Guard features. There's also our Apex Guard algorithms class, which looks at the Genie and Apex algorithms, but most importantly at the new features and the new algorithms we're presenting today.
So if you're interested, take a look at that resources panel; there's more information for you there. We have a poll, if you feel like participating, to help make sure that we're giving you the right content in this session. At this point, go ahead and take a read, answer it, and we'll transition to part two of our webinar topic, which is radionuclidic impurity. From my perspective, it's useful to reframe a little what I mean when I talk about radionuclidic purity. I know that many of you on the call today live this every day and are very well versed in these topics, but just to refresh: what are radioisotopes and what is their purpose? Well, they're developed and produced for both radiodiagnostics and radiotherapies, recognizing that there is a supply chain for the development and production of these radioisotopes, where the isotope itself needs to be produced before being bound to a radiopharmaceutical molecule and administered down the supply chain to radiopharmacies and patient care facilities. So where does Apex Guard fit in this picture? It really focuses on the front end of the supply chain, where we originally produce those radioisotopes that get used for radiotherapeutics and radiodiagnostics. Our challenge, and what we're trying to address, is: how do we demonstrate that the isotopes being produced for cancer treatments and diagnostics are as pure as they reasonably can be? How do we report the radionuclidic purity and ensure that the samples don't contain other radioactive contaminants? If we do our job well on the front end, that has a great impact all the way downstream. There are a few reasons why we might care about radionuclidic impurities. Consider one particular radioisotope, fluorine-18.
This is generated by bombardment of oxygen-18 enriched water in a cyclotron, so we'll have a proton hitting our target, producing our radioisotope of interest, fluorine-18. This is good, but the proton can also interact with the various other structures of the target, creating a range of other radioisotopes that we do not want. Those are not something we want present at any significant quantity in the sample that we're providing downstream to the radiopharmaceutical supply chain. So how do we demonstrate that these are not present at any significant level? Another example is lutetium-177. This particular product is generated by indirect production, irradiating ytterbium-176 with neutrons. In this way we produce ytterbium-177, which decays to our target, lutetium-177, with a 6.6-day half-life. But the irradiation can also create a couple of other products, such as ytterbium-169 and ytterbium-175. These can be of concern, especially when we look at the half-life of ytterbium-169, which is significantly longer than that of lutetium-177. So how do we demonstrate that this is not present at a significant level in the sample that we're manufacturing? As we were rolling out Apex Guard 1.0 with the data integrity functionality, these questions continued to come up, and this is where we started focusing on how we, as a software manufacturer, can provide a solution that helps answer this question. With that, I want to hand it off to Henrik to take it from here.

Thank you, Kara. As Kara said, the objective of this radionuclidic purity measurement is to determine whether a set of radionuclide impurities is below a percent limit of the primary radionuclide activity, and that's what we call the radionuclidic purity. To do that, we have to make some measurements.
For the impurity radionuclides we use gamma spectrometry, though potentially you could use something else; in this case we're going to talk about gamma spectrometry. For the primary radionuclide we can also use gamma spectrometry or something else, which could be a previous measurement with gamma spectrometry, or a measurement with a different instrument, for example a dose calibrator. Then, if we want to calculate the radionuclidic purity, we just divide the activities, right? But what about the uncertainties? Every single measurement of an activity has an uncertainty. How can we take that into account, and what do we want to do when our software doesn't detect that there is an impurity? We looked into this and tried to come up with a solution that is defendable, easy to use, and easy to interpret, so I'm going to guide you through our reasoning. Here's the section on our impurity analysis: the theory and the algorithm. As we said, what we want to determine is the ratio of two activities, and we want to include their uncertainties. So we're going to have two measurements. One is the measurement of the activity, with an uncertainty, of the impurity radionuclide. The measured activity, one becquerel in this case, is the most likely activity, but we also have an uncertainty, usually given as a 1-sigma uncertainty, and that determines the width of this curve, which we call the probability density function. We don't know what the true activity is, but we know the most likely value and how wide this distribution is. So, given this measurement with this activity and this uncertainty, we can express the probability of any given true activity as a function of the true activity. Then we do the same thing with the primary radionuclide.
We get a measurement of the activity, and that's the most likely value, and again we have a 1-sigma uncertainty that determines the width of this distribution. What we then have to do to find the activity ratio is divide these two. Since these are not numbers but probability density functions, that means we have to do a convolution of the two, and then we get the probability density function of the true activity ratio. It's going to have a most likely value, and it's also going to have some width, and then we need to work out how to compare this to something. To make matters even more complicated, we know that no activity can ever be negative, and when you have a very large relative uncertainty, you might come up with a non-zero probability of negative activities. We have to deal with this, and we can also then get a non-zero probability for a negative activity ratio, which is also unphysical, so we have to deal with that too. The first thing we have to determine is whether we can find an equation for this probability density function of the activity ratio. We started looking at this and did some simulations: we draw random numbers from two normal distributions, like these two, and for every pair we divide them and plot the result. That gives us what the probability density function looks like for the true activity ratio, and then we see if we can find a function that reproduces it. So let's look at a little video we made of this. In the top left corner we have the impurity radionuclide, and we draw a random number from its distribution. In the top right we have the primary radionuclide. At the bottom we have the ratio of these two. So we draw a random number from each, divide them, and as we do this many times, we build up this distribution here.
That's the probability density function of the ratio, and something we can see immediately is that it's non-symmetric. Here's the end of the video. It's a little hard to see, but the bottom distribution is non-symmetric, and we can also see that we have a blue curve here that reproduces the simulated activity ratio exactly. That's great; now we have this. Again, we want to determine the ratio including the uncertainties. So we get our probability density functions, and the way we deal with the non-zero probability of a negative ratio is that we follow the ISO 11929 standard by setting it to zero. We then renormalize the rest of the probability density function so that the total probability of having a true activity ratio that is zero or above is still one. That gives us a probability density function of the true activity ratio. What we do next is define something we call the maximum potential percent impurity, and we define it as the value of the activity ratio that we are 95% sure the true activity ratio is below. The green part here is 95% of the values, and the 5% here is above. This 5% and 95% are what statisticians call the alpha and 1-minus-alpha confidence levels, and in the software alpha is a settable parameter, so you can choose how large you want this region to be. So now we have a maximum potential percent impurity: a single number which, given those two measurements, is the value of the activity ratio that we are 95% sure the true activity ratio is below. And this is something we can compare to a limit.
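The procedure Henrik describes, sampling both activity distributions, forming the ratio, truncating negative values per ISO 11929, and taking the 95th percentile, can be sketched numerically in a few lines. This is a Monte Carlo stand-in for the closed-form solution used in the software; the function and variable names are illustrative assumptions, not Apex Guard's API:

```python
import numpy as np

rng = np.random.default_rng(42)

def mppi_monte_carlo(a_imp, u_imp, a_prim, u_prim, alpha=0.05, n=1_000_000):
    """Monte Carlo sketch of the maximum potential percent impurity (MPPI):
    sample the two measured activity distributions, divide pairwise to build
    the ratio distribution, discard negative ratios (ISO 11929-style
    truncation plus renormalization), and return the (1 - alpha) quantile."""
    imp = rng.normal(a_imp, u_imp, n)      # impurity activity samples (Bq)
    prim = rng.normal(a_prim, u_prim, n)   # primary activity samples (Bq)
    ratio = imp / prim                     # ratio PDF is non-symmetric
    ratio = ratio[ratio >= 0.0]            # truncate unphysical negative ratios
    return float(np.percentile(ratio, 100.0 * (1.0 - alpha)))

# Impurity 1.0 +/- 0.5 Bq against a primary activity of 10 kBq +/- 1 %
mppi_detected = mppi_monte_carlo(1.0, 0.5, 10_000.0, 100.0)
# Even an undetected impurity (0.0 +/- 0.5 Bq) still yields a finite MPPI
mppi_nd = mppi_monte_carlo(0.0, 0.5, 10_000.0, 100.0)
print(f"MPPI (detected):     {100 * mppi_detected:.4f} %")
print(f"MPPI (not detected): {100 * mppi_nd:.4f} %")
```

The second call shows the point made earlier in the talk: a measurement consistent with zero still carries an uncertainty, so it still produces a usable upper bound to compare against a limit.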
So if you have to show that the impurity activity ratio is less than a specific fraction of the primary radionuclide activity, or of the total activity of a sample, that is the value you would compare to. If this value is less than the limit, then you are 95% or more certain that the true activity ratio is below the limit. Before you ask how this is different from the MDA, you have to go back and think about what the MDA is. Currie, and later ISO 11929, refined these definitions a little, but basically they try to answer two questions. The first is: is the measured signal large enough that we are 95% sure it is not consistent with zero? That's what Currie called the critical level; it's called the decision threshold in ISO. The other question is: given a background measurement, how large does the activity need to be so that we get a signal above the critical level 95% of the time? Those are not really the questions we are interested in for radionuclidic purity measurements. What we are really interested in is: is the signal small enough that we are 95% sure it's below the limit? That's the real question we wanted to answer. If we look at this figure, the blue curve is the distribution of the observed signal when the physical phenomenon is not present; if you repeat the measurement multiple times, you get a distribution looking like this. The critical level is then defined as the value you will be below 95% of the time if the signal is not present. The MDA is defined as the activity for which, if you do multiple measurements, 95% of the time the signal you measure will be above the critical level. So if you have an activity at the MDA and do repeated measurements, you get this orange curve. The bigger difference, really, is that if we use this MPPI and calculate our ratio, we pick from this blue distribution.
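Currie's two questions above correspond to well-known counting formulas. A minimal sketch, using the standard textbook forms for a paired-blank measurement with 5% error rates; these are the classic approximations, not necessarily the exact expressions implemented in Genie:

```python
import math

def currie_limits(background_counts, k=1.645):
    """Currie's classic counting limits (paired blank, equal count times).
    l_c: critical level / decision threshold -- a net signal above this is
         declared 'detected' with false-positive rate alpha (k=1.645 -> 5 %).
    l_d: detection limit -- the true net counts needed so the observed
         signal exceeds l_c with probability 1 - beta (same k here).
         l_d = k^2 + 2*l_c, the familiar 2.71 + 4.65*sqrt(B)."""
    l_c = k * math.sqrt(2.0 * background_counts)
    l_d = k * k + 2.0 * l_c
    return l_c, l_d

l_c, l_d = currie_limits(100.0)   # e.g. 100 background counts under the peak
print(f"critical level = {l_c:.1f} counts, detection limit = {l_d:.1f} counts")
```

Dividing `l_d` by efficiency, emission intensity, and count time gives the familiar MDA in becquerels; the talk's point is that this answers "how much activity would we reliably detect?", not "are we confident the activity is below the limit?".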
If you were to use the MDA instead, you would always pick this value as your activity, and that has some implications, as we'll see later, for the specific values we calculate. All right, so let's give it a try. We tested this on Monte Carlo simulated spectra, which are very nice for testing because we know what the true activity ratio is; we decided what it was. We simulated, using MCNP, a 20-milliliter liquid scintillation vial at 10 centimeters from a 30% p-type detector. We generated 360 ten-second measurements and then summed them, so that we get a spectrum after 10 seconds, 20 seconds, 30 seconds, and so on, and then we analyzed every spectrum to get the time evolution of the calculated quantities. In this figure you see what the simulated spectrum looks like for lutetium-177 without any impurities; these are very realistic spectra. OK, so when we analyzed these, we did it twice: once with the simulated spectrum that did not have any of the impurity, ytterbium-169 in this case (that's the left-hand side), and once where we simulated the impurity activity at the limit, which in this case was 0.01%. Looking at the left-hand side to begin with, the blue curve is the MPPI, the new algorithm, and the orange curve is the MDA divided by the primary activity. We can see one thing immediately: the blue curve is always below the orange curve. What does that mean? It means we will be able to determine that this sample, which is pure, is below the limit with fewer counts. The blue curve goes below the limit at around 600 or 700 seconds; the orange curve goes below it at around 2400 seconds. That's a reduction in count time by a factor of 3 to 4 using this new method.
Now let's look at what happens when the impurity is present. Again the blue curve is the new method and the orange curve is the MDA divided by the primary activity, and I also plotted the plain activity ratios, where we do not take the uncertainties into account. Again we see that the blue curve goes down fast in the beginning, heading toward the value we know is the true activity ratio, but then it stops going down and hovers around here. What is important is that it never goes below the limit. So wherever we stop our measurement, we will make the decision that we are not 95% sure the activity ratio is below this limit. If we look at the MDA divided by the primary activity, it just goes down and then continues to go down. That's because the software happily continues to calculate an MDA even after it has found the peak; it calculates what the MDA would be if the signal that was found were not present. So it's not a very useful quantity here. You could instead switch to just using the impurity activity divided by the primary activity, not taking uncertainties into account, but if you do that, you get this green curve. Because these values have very high uncertainties, the measured values can actually fall below the impurity limit even though the true activity ratio is at the limit. So if you don't take the uncertainties into account when you're close to the limit, you can make the wrong decision. That's why it's so important to take these uncertainties into account in these calculations. After showing all these graphs, I want to show that there is math behind this. I'm not going to go through it in much detail, but here it is. We start with two normal distributions, and here's the full expression for the ratio distribution.
There's a simpler one here, which also has a cumulative distribution function, that is, its integral, and that's how we can determine the MPPI. We follow the ISO 11929 formulation on how to deal with the non-negative prior distribution: we multiply and renormalize (here's our normalization factor) and end up with the probability density function and cumulative distribution function. We plug these in, set the cumulative distribution evaluated at the MPPI equal to 1 minus alpha, where alpha is something you can choose and is typically 5%, and solve to get the MPPI out. So how do we determine the activity when the radionuclide is not found by the software, when the software didn't decide that there is a peak there? Well, you can always calculate an activity even when there's no peak, and this is what we do. Here's the region around the 307 keV peak from ytterbium-169. We have this region of interest, these green dots here, and we can tally the number of counts in there, which is the signal plus the continuum. We estimate the continuum from the sides, then subtract the estimated continuum from the total number of counts, and we get the signal: that's the peak area. We can then convert that to an activity using calibration factors and intensities like normal. In this case, we get a negative area with 100% uncertainty. That means it is consistent with zero, but it is still a measured value with an uncertainty that we can plug into our math to get the ratio. The last example I want to show is that you can use this in many different ways. One thing you might want to do is determine the actinium-227 impurity in actinium-225; actinium-227 has a long half-life, and the two can be produced together.
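The net-area estimate Henrik describes, gross counts in the region of interest minus a continuum estimated from channels on either side of the peak, can be sketched as follows. The function name, the flat-continuum estimate, and the simple uncertainty propagation are illustrative assumptions, not the exact Genie algorithm:

```python
import numpy as np

def roi_net_area(counts, lo, hi, n_side=4):
    """Net peak area from a spectrum region of interest (ROI):
    gross counts in channels [lo, hi) minus a continuum estimated from
    n_side channels on each side, with Poisson-propagated uncertainty.
    A negative net area is kept as-is: it is a valid measurement
    consistent with zero that can still feed the activity-ratio math."""
    width = hi - lo
    gross = counts[lo:hi].sum()
    side = counts[lo - n_side:lo].sum() + counts[hi:hi + n_side].sum()
    continuum = side * width / (2.0 * n_side)       # flat continuum estimate
    net = gross - continuum
    # Poisson counting statistics on the gross and side regions
    u_net = np.sqrt(gross + (width / (2.0 * n_side)) ** 2 * side)
    return float(net), float(u_net)

spectrum = np.full(200, 10.0)                       # flat 10 counts/channel
net, u = roi_net_area(spectrum, 90, 100)            # no peak: net ~ 0
with_peak = spectrum.copy()
with_peak[93:97] += 25.0                            # add a 100-count peak
net_pk, u_pk = roi_net_area(with_peak, 90, 100)
print(f"no peak: {net:.1f} +/- {u:.1f}; with peak: {net_pk:.1f} +/- {u_pk:.1f}")
```

On the flat spectrum the net area is zero with a non-zero uncertainty, exactly the kind of "no peak found, but still a measured value" result described above.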
Actinium-227 has low-intensity gamma rays, but its daughter thorium-227 has higher intensities and an 18.7-day half-life. So what we can do is let the daughter thorium-227 grow in, calculate the impurity from the ingrowth, and do this measurement at different times. If we do that using this MPPI method, we get this blue curve; if we use the MDA divided by the reference activity, we get the orange curve. If we want to show that the impurity is less than 0.1%, we can do that with just under 5 days of ingrowth of the thorium-227, while if we used the MDA, we would have to wait seven days. So you can not only reduce the count times; you can also reduce how long you have to wait to do a second measurement. That's all I have to say on the theory of the algorithm, and I'll hand over again to Kara to talk a little more about how this is used.

All right, thanks everyone, and thanks Henrik. Hopefully Henrik convinced you that the math is sound and that the MPPI, the maximum potential percent impurity, is a good way to address radionuclidic impurity measurements. What I want to do in the next section is actually share my screen and walk through a couple of examples of how this particular algorithm is implemented in the Apex Guard software. Feel free to drag your screen off to the side so you can follow along in the slides while I share directly from the user interface. Probably our first concern is how the report is implemented, so let me open up an example. Here is my search field for the reports; I'm going to open a sample that we have available here. One thing we wanted to do with the report was to make it as easy to use as possible, giving the information that's most relevant, and only that information.
What we've implemented is a report template. You can add additional sections as needed, but the most consolidated view has our header information, which is the same as before: the sample ID and description that you would have entered when you acquired the sample, some information about the measurement itself, some of the measurement analysis parameters, and calibration references. One new thing we've added is the analysis number, which just helps indicate whether the data has been reanalyzed, alongside the approval information. Coming now to the second page, we have one consolidated view that displays my radionuclidic impurity results. The radionuclidic impurity section shows the activity reference, which I'll dive into a little more in the next section, as well as our alpha confidence. That 1 minus alpha, typically 95%, is where we draw that line on the activity ratio distribution. It also shows which library is referenced; in this case we have the reference date defined, and then a listing of all the radionuclides we're concerned with. With respect to the impurity analysis, we have our primary radionuclide clearly identified, as well as the activity and activity uncertainty used, and then a listing of the impurities that have been analyzed. The way we actually implemented this algorithm, it is generalized; by that I mean it's not specific to any unique radionuclide or impurities. These are things we set up in the library, so you have control to add or remove different impurities, or to support a radionuclide that is not one of the ones we've been commonly referencing so far. In this case, we have the two impurities with the MDA divided by the reference activity shown in the third column from the right. This is really done to help you compare with results you might have generated so far and see whether this is consistent with your existing processes.
Next we have our MPPI with its calculated percentage, and then finally, on the far right-hand side, a user-defined impurity limit, which is something you configure in the software. What's nice about this is that the MPPI sits side by side with the limit, so you can compare them directly. We also sum the impurities: this is the summation of the individual MPPIs, and we can define a limit for all the impurities together in the analysis. This sum-of-impurities limit is not simply the sum of the individual radionuclide limits, but is again something that can be set up specifically for your application. If all of these tests pass, and by that I mean all the MPPI values are below their user-defined percent limits, we generate the statement that sample analysis results are below the specified radionuclidic impurity limits as of this particular reference time. We also calculate what we call a minimum radionuclidic purity, which is 100% minus the sum of the MPPIs. This is not saying the radionuclidic purity is 99.992%; it is saying it is at least 99.992%, at a 95% confidence level. If you forget what the MPPI is, we include its definition right at the base of the report, so if you give the report to someone who is not familiar with this concept, they can always reference the definition right there. You'll notice I have a second section here that I skipped over. One of the new features we implemented is the ability to calculate and display results for two reference dates in one analysis. One reference date might be the time of analysis, and the second might be a future date, say the time of injection, to see what the result would be then.
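The per-nuclide checks, the summed-MPPI check, and the minimum radionuclidic purity statement described above can be sketched as follows. This is an illustrative sketch of the reporting logic, not Apex-Guard code; the nuclide names, MPPI values, and limits are made up:

```python
def purity_summary(mppi_by_nuclide, limits, total_limit):
    """Compare each nuclide's MPPI (%) to its limit, sum the MPPIs, and
    derive the 'minimum radionuclidic purity' shown on the summary report."""
    per_nuclide = {n: (m, m <= limits[n]) for n, m in mppi_by_nuclide.items()}
    total = sum(mppi_by_nuclide.values())
    all_pass = all(ok for _, ok in per_nuclide.values()) and total <= total_limit
    min_purity = 100.0 - total  # "at least this pure", at the 1 - alpha level
    return per_nuclide, total, all_pass, min_purity

# Illustrative values only (not from a real analysis):
mppi = {"Yb-169": 0.003, "Lu-177m": 0.005}
res, total, ok, purity = purity_summary(mppi, {"Yb-169": 0.1, "Lu-177m": 0.1}, 0.1)
print(ok, round(purity, 3))  # True 99.992
```

Note the asymmetry the presenters emphasize: a passing report states the purity is *at least* `min_purity`, not that it equals it.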
This also demonstrates another feature: what happens when a particular result is above the percent limit. In this case the ytterbium-169 MPPI is calculated to be 0.19%, which is above my user-defined limit of 0.1%. Two things happen. First, you can see it gets clearly highlighted in yellow, so it is very easy for a reviewer to identify where a result is above the limit. Second, we no longer print the statement saying that everything is below the limits. OK, let me change to the next slide here. One question you may ask is: how do I set up these radionuclides? To do that, let me go to my setup screen and open our nuclide library editor. Under my editors I can access where I would configure a particular radionuclide library. Let me open my lutetium-177 example library; two major enhancements have been made here. One that is useful to know is what we now call the interactive report. This actually borrows from our Genie 4.0 implementation of interactive reports; we did a webinar on that, I believe last year or the year before, if you have more questions or interest. What this gives us is a different way to look at this information. Specifically, the interactive report shows not only the intensities of each of the different lines, but also the impurity analysis category, the impurity limit, and whether the line is used in the weighted mean or the impurity analysis. We'll talk a little more about what that setting is. One of the nice features of the interactive report is that I can easily toggle the view to sort by different parameters; in this case, I can bring the largest intensity up to the top, or I can continue to sort by energy.
This is also something you can print and store with your records. For any given radionuclide, the way we implemented these adjustments for Apex-Guard is a new function called impurity analysis. This menu option launches a screen where I can identify whether any radionuclide in the library should be an impurity or a primary. We also support a primary surrogate, which is a radionuclide whose activity can be used in place of the primary if it is a daughter of the primary radionuclide, or we can simply exclude a nuclide altogether. If a nuclide is defined as an impurity, right here is where I set the percent limit that we saw on the far right-hand side of the report, and we can also choose to simply report the MPPI without challenging it against any particular limit at this time. OK, second up, let me talk a little about that reference activity. We implemented the MPPI analysis as an analysis engine that is called within an ASF, so right now I'm going into my analysis sequence editor and opening an analysis sequence we created. The MPPI analysis is a post-NID step; by that I mean you follow all the standard steps you would for any radionuclide analysis, a peak search, a peak area determination, nuclide identification with activities, and an MDA calculation, and then you run this new step, the post-NID processing for MPPI analysis. If I open up and look at some of the options here, this is where the alpha confidence value of 5% can be defined. It is also one of the two places where we can set the total limit for any given analysis. Then, at the bottom, I have a setting for what the impurity activity is measured relative to; by that I mean, what do we put in the denominator of the activity ratio? Are we considering only the primary activity, or all the activity in the sample?
In many cases those can be very much the same, if the primary activity dominates, but not always, so this gives you the option to choose between them. There is also an option here called user-entered activity, measured externally. If you have an activity from an earlier gamma spectroscopy measurement, not the one you are analyzing, you might want to use it here; or if you have an activity measurement from a non-germanium system such as a dose calibrator, this is where you can take that result and use it as the reference for your calculations. I'll show you that in a few minutes. So let me switch now to my sample setup and my procedure setup. We've covered configuring the library and the analysis sequence options; the third step is creating a procedure, and that procedure references the radionuclidic library, my analysis sequence, and my new report. On the next screen I have my sample setup script selection. This is where I choose whether I want a standard impurity analysis, one with two dates, or one with the field that lets me enter an external activity. Let me show you what that looks like. I'm going to choose a sample and create a very meaningful sample name. This sample script has selected a dual reference date, so for each specific sample I enter my primary reference date and then the secondary reference date. Meanwhile, if I select a script with an external reference option, I get a screen that looks like this, where I again have my reference date, but now I'm able to enter an activity from some other measurement altogether.
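The role of the alpha confidence and of the reference-activity denominator discussed above can be illustrated with a small Monte Carlo: sample the measured impurity and reference activities from their uncertainties, form the percent ratio, and take the (1 - alpha) quantile. This is only a sketch of the idea; Apex-Guard's actual MPPI algorithm is not reproduced here, Gaussian uncertainties are an assumption of this sketch, and the numbers are made up:

```python
import random

def mppi_sketch(a_imp, u_imp, a_ref, u_ref, alpha=0.05, n=100_000, seed=1):
    """(1 - alpha) quantile of the impurity/reference activity ratio, in
    percent, estimated by Monte Carlo with assumed Gaussian uncertainties."""
    rng = random.Random(seed)
    ratios = sorted(100.0 * rng.gauss(a_imp, u_imp) / rng.gauss(a_ref, u_ref)
                    for _ in range(n))
    return ratios[int((1 - alpha) * n) - 1]

# A 0.10% true ratio with modest uncertainties: the 95% bound lands a bit
# above the central value, and that bound is what gets compared to the limit.
print(round(mppi_sketch(a_imp=0.10, u_imp=0.02, a_ref=100.0, u_ref=1.0), 3))
```

Larger measurement uncertainties widen the ratio distribution and push this upper bound higher, which is exactly the behavior Henrik describes later in the Q&A.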
I do want to note that it also requires the uncertainty because, as Henrik said, the uncertainty measurements are a really critical component of this analysis. So let me stop sharing here and return to the slides. There is one other feature I want to highlight here. Oh, I missed it; here we go. There is also an option for a delayed start, if you look right here. In the case that you want a sample count to start not a particular amount of time after you place it on the detector, but rather at a particular time after, for instance, finishing a chemistry step, you can set it to start today, tomorrow, or whenever, at a specific time. Let me now hand off to Henrik to discuss an example of running this analysis with a particular reference activity.

Thank you, Kara. One example where you want to use this external reference activity is if you are measuring fluorine-18. One typical method is to do an initial measurement when the F-18 is present at very high activity, and then a second measurement two days later, or some other time later. By then the F-18 has decayed away, and you can see whether any radionuclides are left; those would be the impurities. So in this case you do the first measurement with the F-18 present, which could be done with gamma spectroscopy or something else, and you record that activity and its uncertainty. Then, when you do the second measurement, you input this external reference activity from the previous measurement as the reference activity. And as Kara said, an uncertainty is required, because every measurement has an uncertainty. This external activity needs to be entered at the sample date, because the software will not apply any decay correction to it.
The assessment of the impurities is then done using the external reference, even if that radionuclide wasn't found in the gamma spectrum. We see an example here: we measured 29 microcuries of F-18 with an uncertainty of 2.34, and then we analyzed our gamma spectrum for the radionuclides listed here. We calculate an MDA divided by reference activity and an MPPI for each of these radionuclides and sum them. In this case we wanted to compare the sum across all of these radionuclides to this limit, and we can see that this particular sample passed. So this is one example of where you might want to use the external reference activity: when you do more than one measurement, you measure the activity of the primary radionuclide in the first measurement and then assess the impurities against the limit in the second. Over to you again.

Thank you, Henrik. All right, bringing it up for the last five minutes or so, let me cover a few more features. I focused on demonstrating the radionuclidic summary report, but we also have a detailed report. The MPPI is in fact calculated for each radionuclide line, and the nuclide MPPI is the lowest of those particular results. If, for troubleshooting or additional validation, you want to review the line information, it is available in this report, which also indicates here in this row of icons whether a particular line was excluded from the MPPI calculation, along with other troubleshooting tips to review. One request we had received a couple of times is whether we could implement a count-to-peak-area in the software. This is something we've had available in Genie previously, but not on the Apex system.
So what this enables is: if you have identified a specific energy, in this case 661 keV, which would be a line of cesium-137, you can instruct the software to count until the net peak area reaches a target number of counts. What it actually does is count to the minimum time, estimate how long it will take to reach the target area, count until it gets close, and then refine the estimate. If it has not reached the target area by the time it reaches the maximum, it will go ahead and stop there. When we introduced the MPPI approach, the immediate next question was: it would be really cool if you could count to that limit. So indeed, in a very similar way to count-to-peak-area, we have a count-to-impurity-limit function. The software counts to the minimum time, estimates how long it will take to reach the user-defined limit for each of the defined radionuclides, and stops once it has reached them all. I do want to note that the MPPI does fluctuate; it is a statistical result, so this is something you may want to look at carefully and think about how you use it in your procedures. A major change that I think a lot of users will like is the behavior of one of the permission settings going forward. With our initial release of Apex-Guard, we intentionally did not allow users to access the detector MCA outside of the Apex interface, because we want to control access for data integrity and record any changes in the audit log. That said, we think some users who might not need that Part 11 compliance would like to be able to toggle over to the MCA in a Genie environment for troubleshooting, research, or other applications. So this has now been changed to be a settable feature.
Now, this does require administrative permissions to change, and it is recorded in the audit log, but it does open the door to a somewhat wider range of applications. Second-to-last slide here: there is a range of startup assistance services available to help implement the software. We have IQ/OQ services for both new systems and updates; with the update service, a service engineer can perform the software update for you and then re-verify the software functionality. As I mentioned earlier, we have a range of training classes available, whether to develop your subject-matter expertise or to train up a wider portion of the team. We also have application setup assistance services. These are delivered by a group of specially trained service engineers; they often lead our training classes, for example. We've equipped them with libraries and other tools that Henrik and some of the other physicists have developed, along with recommendations for what we think the optimum library settings might be. These folks can come to your site, work with you to understand exactly what your needs are, and tailor a library and other functionality for you. As a last comment: Mirion is a growing company, and we have a fairly large range of other companies in our medical realm. Capintec is one, and ec2 is another, if you're familiar with them. That gives these different products the opportunity to work a little more closely together. I think we had one question in the chat about operating with Capintec dose calibrators; using the external activity is one way you might input those results, but ec2 also offers a QMS software that interfaces directly with some dose calibrators as well. These are things we can integrate together to make the workflows even smoother and easier to work with.
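Going back to the count-to-peak-area feature mentioned a little earlier, the scheduling (count a minimum time, extrapolate to the target area, then refine) can be sketched as a simple rate estimate. This assumes an approximately constant net count rate, which is a simplification; the real software refines its estimate as it counts and always stops at the maximum time:

```python
def remaining_time_to_area(target_counts, current_counts, net_count_rate_cps):
    """Naive estimate of the seconds left until the net peak area reaches
    the target, by extrapolating the observed net count rate."""
    if current_counts >= target_counts:
        return 0.0
    return (target_counts - current_counts) / net_count_rate_cps

# e.g. a 10,000-count target at 661 keV with 2,000 counts accumulated at 25 cps:
print(remaining_time_to_area(10_000, 2_000, 25.0))  # 320.0 seconds
```

Count-to-impurity-limit works the same way in spirit, except the stopping condition is every nuclide's MPPI falling below its limit rather than a single peak area.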
One last slide here, but do take a chance to answer the poll before we switch to questions, and let us know if you'd like more information. Remember, we have the resources pane off to the left-hand side with more information for you to review. Let me come to a closing thought; I know we're just at the hour mark. Mirion's mission, as displayed here, is to harness our unrivaled knowledge of ionizing radiation for the greater good of humanity. As Henrik and I and a number of other people have worked on this project, that mission has resonated more strongly than on a lot of other projects we've had the opportunity to work on. We're very excited to deliver software that we think will make your operation smoother and easier, and reduce counting times in some instances. But what is really exciting is that it truly has the ability to improve the quality of some of the products that you are making for cancer treatments. It's not very often, in the various product releases we work on, that we get to say that so directly, so I want to emphasize and celebrate that a little on this note. Let me now toggle to questions. We are at the end of the hour and this will be recorded, so feel free to drop off if you're all set, and feel free to add more questions as you think of them. All right, Henrik, I'm going to read this particular question for you: How will it handle decay corrections? What if the impurity has decayed significantly, i.e., near or below the LOD? What if the percentage was above the limit at T0; will it still be able to confidently flag it after significant decay?
So, you are going to input a sample date, and all the measured activities are decay-corrected to that sample date. That means if you have an activity at a later point in time and you want to know what the impurity was at an earlier point in time, the software can do a decay correction backwards to that time. It can also decay-correct forward in time. It's of course best to do the measurement when you have as much impurity relative to the total activity as possible; that will give you the best answer. That's like the F-18 example: you can do two measurements, doing the second after the primary has decayed away, and the software will decay-correct back to the original date.

Great, thank you. We have another question here; I'm going to bring up this previous slide. We had a request to talk a little more about the key points of measuring the impurity when you do have an impurity near the limit. So, the right-hand figure then, correct? OK. This is a simulation, so we know what the true ratio is: that is this red line, the true ratio of those two activities, and that's the value you want to show is below the limit. In this case, you can't show that it's below, because it isn't; that's where it is. What we can see on the blue curve, the MPPI curve, is that it goes down initially, and when it reaches the true activity ratio it stops going down and starts hovering a little; you can see it doesn't really decrease significantly after that. It then doesn't matter where in time you stop your measurement; say you stop after half an hour or an hour, you would still draw the conclusion that we are not 95% sure that the true activity ratio is below the limit, which is what we wanted to show.
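The decay correction Henrik describes at the start of this answer is the standard exponential-decay relation; here is a minimal sketch, with illustrative activities and half-life values that are not from the webinar:

```python
import math

def decay_correct(activity, t_half_days, dt_days):
    """Decay-correct an activity across dt_days: a positive dt moves the
    reference time forward (activity decreases), a negative dt corrects
    backward to an earlier reference date (activity increases)."""
    return activity * math.exp(-math.log(2) * dt_days / t_half_days)

# One half-life forward halves the activity; correcting back restores it.
later = decay_correct(10.0, 2.0, 2.0)
print(round(later, 6), round(decay_correct(later, 2.0, -2.0), 6))  # 5.0 10.0
```

This is why the external reference activity must be entered at the sample date: the software applies this correction to measured nuclides against that date, but not to the user-entered reference itself.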
However, if one were to use the MDA divided by the primary activity, that measure alone will just continue to go down. It also starts above the limit and keeps decreasing, and at some point the software will find that the radionuclide is present, because it is present, but the MDA will still be calculated and will just continue to go down. So in this case, you would have to switch to something else. And if you were to switch to just taking the ratio of the activities without taking the uncertainties into account, you get this green line here. Because the uncertainty of the impurity radionuclide is so large, if you just use that ratio to decide whether your activity ratio is below the limit, you will make the wrong decision: the uncertainty band on it is so large that it extends over the limit. So I guess the main takeaway is that, using this new method, you guard yourself against making the wrong decision.

Let me ask a follow-up here. There is a good amount of chatter in our Q&A, and one of the comments was that the blue line seems to be approaching the red line. How much of a margin of error is there? How can we be confident that it's not too close?

That really is the alpha parameter. If you set it at 5%, then the curve might come this close, and if you set it to a higher confidence, it will push this blue curve up. That's basically what would happen if you change that parameter. But what the alpha parameter really means is that, if we do many, many measurements exactly at the limit, we accept being wrong 5% of the time.
So that really means you might go below this, but only 5% of the time, and only if the true ratio is actually exactly at the limit, which is very unlikely unless you're running a simulation. If the true ratio is somewhat above the limit, the probability of this method falling below it goes to zero quite quickly.

And so if you want to be more conservative, you reduce that alpha as needed? Correct. Thank you. So, returning to the earlier question, there was a follow-up as well: what if the amount of activity present in the decayed sample is straddling the limit of detection for that isotope? Does the MPPI become less certain?

It's not that the MPPI becomes less certain; having a large uncertainty will push up the value. If you have a large uncertainty on the measured impurity, that pushes the MPPI to a higher value. So it's not less certain, it's just pushed higher.

Great, thank you. Here's a good question: how is the MPPI calculated for unknown impurities, for example a peak that doesn't belong to any nuclide in the library?

It will not calculate anything for that, because the correct way of dealing with those is to identify what that peak is. If you don't know what the peak is, you cannot calculate an activity, and you cannot calculate a ratio of activities. So what I would recommend, if you have a peak and you don't know where it comes from, is to determine where that peak comes from; then you can calculate an impurity ratio.

And I would add that on our radionuclide summary report, if there are unidentified peaks in the spectrum that come up on the peak search report, it will indicate that clearly on the summary as well. There will be an asterisk next to the statement that all the sample results are below the limit.
It will note that there are also unidentified peaks to resolve. So that was a really good question and an important point to make sure is addressed. Thank you. Another question here: what reports do I have to generate at T0 and after 24 to 48 hours; will these be different report templates or the same? To help answer this question, let me switch to one of the slides that has a comparable result. Henrik, did you want to speak to this, or would you like me to take it? You can take it. OK, so here's an example; it is one report template. In this example, you have your initial T0 reference and then a later reference. What's required is on that sample setup screen: you select the dual-report option, and once that's done, you enter the two different dates. The calculation is done automatically and displayed in one report for you. OK, let me take another question here; I'm going to read it verbatim: "Even if all those equations are correct" -- do you think those equations are correct, Henrik? Yes. "Do you think that the FDA will require validation that they are implemented properly? What kind of validation, and has the MPPI already passed such validation?" Perhaps we can talk about this together. We've demonstrated, primarily through simulations, that the equations give the correct answers, but I also recognize that simulations and real data are not the same thing. So the way I expect this to work is that individual users will validate the method for the way that they are operating and producing their radionuclides, and I hope that the MPPI approach will be part of that validation. We are happy to partner with any customer and any user on site and work together, whether that's preparing papers, presentations, or other studies.
At this time we are patenting this approach, and that patent process is almost complete, but we also recognize there is no replacement for real measured data, and that's where we have to work with you to help ensure that this is recognized as a useful and appropriate way to do radionuclidic measurements. We do have a question: does Apex-Guard have any compliance claims other than Part 11? Let me clarify that when the Apex software is implemented at a site, there are a number of setups and features that have to be configured to be Part 11 compliant. So the software itself is not a standalone Part 11 compliant product; compliance comes when it is implemented and configured with your processes. That said, in addition, the Apex product is manufactured out of our ISO-certified facility, so we have a range of software compliance and software manufacturing processes that our software quality assurance team maintains. One of the items we can add to our resource list is the certificate of compliance we have for any software produced by our US facility, and this includes things like good manufacturing practices for software products, ISO compliance, and other references like that. We have a question here that just came in: do these reports still show information such as background subtraction, peak analysis, et cetera, given that this is just a summary report? The answer is yes. This is a report template, or rather a report subsection, so it is very easy to add the other sections you might want. All the previous reports exist; you can simply add them on to the end wherever you want them. Great question. Another question here: when the first electronic signature of a user session is applied, are both e-signature components required, username and password? The answer is yes. The way it's configured, the username is the user who is logged into the software.
That user is configured in one of the user groups in the software. For instance, in our demo software we have Fred Flintstone, who is associated with the administrator group, for example. Fred Flintstone has a username and a password that have been configured by the corporate IT department. When he logs on to the software, Fred types in his name and then his password. When he is reviewing a sample result and clicks the approve button to say "I approve," a pop-up is generated that says: this is Fred Flintstone, you are approving this sample, and the password associated with Fred's user account must be re-entered. When that is done, the electronic signature is pushed onto the report. A slight detail here: "Fred," for instance, might be the friendly name that is logged on to the system, but the electronic signature will be the official name, so "Frederick Flintstone" would actually appear on the report. OK, well, Henrik and I are here for a few more minutes, and I believe we have addressed most of the questions. We have one question here before we wrap up our session: what are some of the things one might optimize in the library for the best MPPI results?

That's a great question. There are many things you can optimize in the library. If there is a particular line that you want to use for the calculation, you can pick that one and set all the others so they are not used for the calculation. The software will still calculate an MPPI for those lines, for review on the detailed report, but for the radionuclide it will pick the one you selected and show which line it was.
Another thing you can do in your library, which comes back to the unidentified peaks, relates to the fact that a lot of the time these samples have a very high count rate in the detector, and this can create additional peaks in the spectrum that actually come from the primary radionuclide. These can be sum peaks, both random sum peaks and coincidence sum peaks, or they can be single and double escape peaks, and they can also be X-ray escape peaks. You can put those into the library and flag them so that the MPPI is not calculated for those lines. They will then be associated with that radionuclide, and you remove those peaks from any list of unidentified peaks in your spectrum. So those are a couple of ways you can optimize your library for the best performance.

Thanks, Henrik. I want to take this moment to thank everyone who's still on the line, and especially thank the folks who asked great questions. Feel free to reach out for more information or share with your team members. We're here to help. Have a great rest of your day, everyone. Thank you. Thank you.

Join us as we unveil the latest enhancements to the Apex-Guard™ productivity software. Since its launch in 2021, Apex-Guard has been instrumental in helping radioisotope producers and related industries meet 21 CFR Part 11 compliance for high-resolution gamma spectroscopy systems. Building on the robust foundation of the Apex-Gamma™ application software, Apex-Guard introduced advanced data integrity features along with comprehensive change control and audit log capabilities.

 

In this webinar, preview new user-requested features in Apex-Guard V1.2, including updates that significantly enhance the quality and efficiency of radionuclidic impurity assessments, simplifying the processes of analysis and reporting. Sign up today to learn how Apex-Guard V1.2 can benefit your organization.