Hello everyone. I am Hope and I will be helping moderate today's webinar. I would like to welcome everyone to the Maximizing Spectroscopic Data Efficiency and Accuracy with the Data Analyst Continuous or Triggered Spectroscopic Monitoring Platform webinar. Before we get started today, I'd like to go over some housekeeping. At the top left of your screen you will see our Q&A chat box. Please ask questions throughout the presentation and we will answer as many as possible within our time constraints. Below the Q&A window is our resource list with more documentation on what we will discuss today. You will also see "Contact Us" in there if you would like to talk to one of our representatives, or you can check "Chat with Adam" in the Tools menu. If you don't want to miss any future webinars, please sign up for our mailing list at the bottom of your screen. All of your windows are adjustable, so feel free to move them around for your best viewing. If for some reason you are having trouble viewing the webinar at any point, please refresh your browser. This webinar is being recorded and will be available at Mirion.com/webinars for future viewing; you can also find all of our past webinars there. This link is in the resource section. Today's webinar will be presented by Greg Landry. Greg is the Product Line Manager for Mirion Technologies' In Vivo products and has over 35 years of expertise in the field. He is a recognized subject matter expert in gamma spectroscopy, in vivo measurements, and internal dosimetry. With that, I will now turn the presentation over to Greg.

Thank you very much, and thanks everyone for being here. I'm going to go ahead and get started, and my agenda for today is as follows. We're going to go over the Data Analyst, which is our continuous spectroscopic monitoring platform. I will go over that quite thoroughly; that first bullet item will take probably half the webinar. Then we're going to go over a few examples in which we've actually implemented the Data Analyst in applications that have been successful. There are quite a few of them, but we've chosen three to keep everything in the context of the current discussion. Following that we'll have a quick wrap-up and then a Q&A session.

OK, this slide shows the Data Analyst units that we're going to be discussing: the Wi-Fi version up here at the top and the non-Wi-Fi version at the bottom. Both are available. You'll see these images repeatedly through the webinar. If I can get my camera straight, I've got one in my hand. You can see it's quite small and quite light, about three quarters of a pound. The dimensions on this unit are about 4.5 by 3.5 by 1.5 inches, and that's rounded a bit; they're not exactly that, but close.

OK, so real quickly: the Data Analyst itself is an autonomous spectroscopic software package that will continuously acquire data, identify nuclides, and store the results. The package we provide is embedded on an industrially rated CPU running the Linux OS, and that's again this unit here. There's a list of features that we'll end up touching on as we move through, so I won't delineate all of them here, but I will say that the options listed here for GPS and some other features are actually included in the package with the Data Analyst. I'll take a moment to say one thing: if I say "DA," I mean the Data Analyst, and if I say "CSM," I mean continuous spectroscopic monitoring. That's a little less of a mouthful and keeps things flowing a little better.

OK. So what I want to start off with is the distinction between CSM, continuous spectroscopic monitoring, and the type of counting we're all probably mostly familiar with. And I'll start with what we're familiar with first.
So this is what I'm calling operator-driven counting and analysis, and here I've got an example of a very simplistic count room. In this case we have a detector, a germanium detector with its cooler, the MCA, and then a PC that has the spectroscopy software loaded. Everything is basically communicating through a network, and there is a sample on the counter. A couple of things to point out: there will be some operator that will start and stop acquisition and review the analysis, and furthermore there will have been someone who collected this sample and brought it to the laboratory.

In the CSM paradigm, we change things up a bit. We immediately take out the direct connection between the operator and any spectroscopic software; that's why I have this dashed line here. I'm not saying that we permanently take it out, but we take it out for routine operation, and we place the detector at the site of the measurement, so it's an in situ measurement. In this example, I've got an Osprey connected to the network, and on the same network we'll have our Data Analyst. What's occurring here is that the Data Analyst is operating this Osprey. There's a continuous stream of spectroscopic data, and the Data Analyst will sit and continuously, autonomously make measurements, analyze those measurements, and store the data, so that you have a continuous stream of data from that sample point. In the next slide we'll get a little bit further into the details of how the Data Analyst operates, but a few things to point out here. In this case, we've fundamentally brought the lab to the sample source. Instead of taking a grab sample to the lab, we've done a number of things: we've cut down on the need to go get the sample, and we've cut down on the potential for, for example, radiation exposure to the sample collector, and any potential for spills, contamination, things like that.
Additionally, we are collecting a lot more data than we were in the operator-driven case. In the operator-driven case, we had a data point for every time we sent a sample collector out to the sample location and brought the sample back to the lab. In this case we're able to collect at whatever frequency we choose; it could be analyses minutes apart, hours apart, what have you. We have a much more complete data set and a much better representation of the picture of what it is we're trying to analyze. The dashed line represents the fact that even though this piece of the system is standalone and autonomous, I can connect to the network, communicate with the Data Analyst, and get information off it, either continuously or only when I want to. We'll get a little bit further into that with some of the other options we have there.

OK, so what specifically is the Data Analyst doing? How is it accomplishing this? First, when the Data Analyst puts its detector into acquisition, it's going to run a continuous stream of grab acquisitions, and these are typically set between one and ten seconds. The Data Analysts that I've installed had this set to a default of one second, so you've got spectroscopic data being captured every second. Fundamentally, 100 seconds' worth of these accumulated grabs would constitute what you would normally consider, for example, a 100-second acquisition on an operator-driven system. There's another layer of preliminary assay that occurs at some wider window, where these grabs are collected together and bundled into somewhat larger bundles than the grabs themselves, and you're getting more information in there: the spectral data, any count rate and dose rate information, and internal information from the actual Data Analyst and its peripherals, such as the GPS and MET sensors.
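The grab-to-acquisition accumulation just described is simple channel-wise addition. Here is a minimal, illustrative sketch (not Mirion code; all names are hypothetical) showing how one hundred 1-second grab spectra sum to the equivalent of a single 100-second acquisition:

```python
# Hypothetical sketch: short grab spectra add channel by channel, so one
# hundred 1-second grabs are equivalent to a single 100-second count.

def sum_grabs(grabs):
    """Sum a list of per-grab channel-count lists into one spectrum."""
    total = [0] * len(grabs[0])
    for spectrum in grabs:
        for channel, counts in enumerate(spectrum):
            total[channel] += counts
    return total

# One hundred identical 1-second grabs, four channels each for brevity
grabs = [[2, 5, 1, 0] for _ in range(100)]
print(sum_grabs(grabs))  # -> [200, 500, 100, 0]
```

This is why the DA can serve several analysis windows from one data stream: the same grabs can be re-summed over any interval after the fact.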
Dose rate information, for example from a Gamma Analyst if one happens to be connected, gets bundled up as well. So then we get to the heart of how the analysis occurs. The convention we use for analysis in the Data Analyst is a concept we call workflows, so we set up monitoring workflows. In the case of continuous workflows, we would set up a workflow for some period of time, which you might consider analogous to a preset acquisition time on an operator-driven system. So I could have, for example, workflows set at 5 minutes. That means every 5 minutes in this accumulating stream of data there will be an analysis performed, and the output of that analysis can be a CNF file, an N42 file, any number of the file options that we have. That will be stored on the Data Analyst and, as an option, can be immediately transferred via FTP or secure FTP to some location of your choice on your network. So when the first 5 minutes elapses, I get an analysis and an output; then it immediately starts accumulating again, and after the next 5 minutes there's another output, et cetera.

I'm not limited to one workflow. I can have multiple workflows, and that's fairly common; they all collect from the same data stream. I could have an additional workflow at, say, 10 minutes or 20 minutes. Each of these workflows will have its own analysis specific to it: your peak locate, peak area, nuclide ID, MDA, the whole nine yards. They can either be the same analysis at different time intervals, or you can be looking for completely different nuclides with different types of analysis. So that is fundamentally how what we call the standard or continuous workflows operate. We have another type of workflow, and that would be the externally triggered workflow.
These workflows function based on an input TTL signal to the GPIO accessory that we'll discuss in the next couple of slides, but basically you can think of it like pushing a switch. One of the triggered workflow types that we have is the timed workflow. A timed workflow is similar to the continuous workflows in that it will be set for some specific time, like 5 minutes, but it will not initiate automatically. It will only initiate when that TTL signal comes in, for, I think, a fraction of a second or a second; I don't remember the specific number. Basically you get a signal coming in that says, OK, start that workflow, and then that workflow will execute, generate its analysis, create its files, do any file transfer it's going to do, and then stop until the next trigger comes in.

The final type of trigger that we have is called a gated trigger, where the workflow accumulates during the period that the trigger is actually in place; once the trigger is removed, the accumulation stops, the analysis executes, and the same results files are created and transferred. I think that covers those types of workflows fairly well. The way you can think of the gated workflow, and this is a silly example, is how a garbage disposal in the sink works: you apply the trigger by flipping the switch on and the garbage disposal runs, and when you turn the switch off it stops. That's the gated workflow.

OK, so again, you can have multiple workflows in operation all at the same time. Process monitors are typically going to use the continuous-mode workflows, while sample assay systems will tend to use the triggered-type workflows, and we're going to see some examples of each of those as we move through here.
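The continuous and gated behaviors just described can be sketched in a few lines. This is a toy simulation (not the DA's actual implementation; the function and its time scales are invented for illustration) that feeds a stream of 1-second grab counts and a TTL trigger line to both workflow types at once:

```python
# Hypothetical sketch of the two workflow styles described above, driven by
# one shared stream of per-second grab counts and a TTL trigger line.

def run_workflows(grabs, ttl, interval=5):
    """grabs: counts per 1-second grab; ttl: trigger state per second.

    Returns (continuous, gated): the accumulated counts each workflow
    would hand to its analysis step. Uses a 5-second "workflow" for brevity
    where the talk uses 5 minutes. A trigger still held at the end of the
    stream is simply left open, as a real gated workflow would be.
    """
    continuous, cont_buf = [], 0
    gated, gate_buf, gate_open = [], 0, False
    for t, (counts, trig) in enumerate(zip(grabs, ttl), start=1):
        cont_buf += counts
        if t % interval == 0:        # continuous: analyze every interval
            continuous.append(cont_buf)   # analysis + file output go here
            cont_buf = 0
        if trig:                     # gated: accumulate while trigger held
            gate_open = True
            gate_buf += counts
        elif gate_open:              # trigger released -> run the analysis
            gated.append(gate_buf)
            gate_buf, gate_open = 0, False
    return continuous, gated

# Ten 1-count grabs; trigger held during seconds 2-4
cont, gate = run_workflows([1] * 10, [False, True, True, True] + [False] * 6)
print(cont, gate)  # -> [5, 5] [3]
```

Note how both workflows draw from the same grab stream, which is exactly why multiple workflows with different intervals can coexist on one DA.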
Another thing to point out before I move on is that the DA provides alarming capability, meaning that you can set alarms for count rate, you can set alarms for dose rate, and you can set specific nuclide alarms for activities. So if you're looking to not go above a particular activity, you can set an alarm in the nuclide alarm library within the Data Analyst. Let me make sure I didn't miss anything here. OK.

So there is one other operator-driven case that I'd like to point out that applies here as well. This is the case where you don't necessarily bring a sample from the field to the count room, but you perform a spectroscopic measurement in the field by sending personnel to a particular location of importance. The example I'm using here is facility holdup measurements, where you're looking to see if material is being held up in piping or in ventilation ducts and things like that, to make sure you have a full account of your inventory of materials. In the picture to the left here you see two personnel making a holdup measurement within a facility, at this pipe location. You see they're taking extra steps to try to standardize the geometry, and they're clearly in a contaminated area, with sleeving on their equipment. So this is a bit of an evolution to go in and do this and try to get it right. That would be the operator-driven mechanism of performing that holdup measurement, the way we're typically used to seeing it.

The CSM alternative would be to find those important locations and place a detector that's compatible with the DA, along with the MCA. In this case, I've got an Osprey with an NaI detector that's on the network, and, though it's not in this picture, we've got our DA somewhere. We're collecting spectral information from the detector, and that goes to the DA.
The DA runs its workflows and sends that information wherever it needs to go, if that's the way it's set up. The advantage of this approach over the manual one is that now I've got a very fixed geometry. The geometry is not moving; I don't need to worry about exactly how I position the detector each time I come in. I'm able to bring ISOCS to bear on this problem to generate an accurate efficiency that can be catered to each location. Also, the analysis results are always available; I don't need to wait for staff to suit up and go back in to get me a new measurement. I can pull up that information at any time, and I require no repeat entries into a contaminated area in this particular case. So there's a lot of benefit to setting up the system in this manner. All right, so that's the other operator-driven case.

OK, so here's a block diagram of the building blocks for DA-based CSM applications. Detectors and MCAs: basically we have a very wide variety of MCAs that are compatible with the DA. The Osprey, as we mentioned, is compatible with the DA, so any scintillator that is compatible with the Osprey can be used with the DA. The Lynx II is compatible with the DA, and the Aegis is compatible with the DA. So any detector that can be used with a Lynx II, which is pretty much all of them, can be used with the DA, and the Aegis is used directly with the DA. We also have compatibility with the GR1 and GR1+ cadmium zinc telluride detectors. These are USB devices that plug into the USB port on the DA, and these are some of the first systems we tested out; we were using these CZT detectors, and we'll look at an example of this in Example 1, coming up in a bit. And as far as dose rate, the DA can take input from an EcoGamma-G and store that information.
Furthermore, the DA has the capacity to accept a spectrum-to-dose conversion calibration to actually get dose rate from the various detectors we've already discussed, be they the germanium detectors or the scintillators. The system doesn't necessarily come with all of those, but it is capable of doing that.

OK, so on the other side, the DA can collect and save this information without communicating with the outside world at all for quite some time; the number is years as far as data collection goes, so filling up this device is not an issue. As for results visualization tools: the primary tool, and the one that comes with the DA, is the DA Dashboard. We're going to have a look at that a couple of slides from now. There are a couple of additional tools that we can provide: the DA Prospector and the RE Examiner. These are additional tools that you can acquire from us. The DA Prospector gives you more advanced visualization, and it can also act as an FTP server, so in that passing of information from the DA, you can actually use the DA Prospector as the FTP server. The DA Prospector is able to function on both live and stored information for data visualization. The RE Examiner is more of an advanced tool that lets you do things like combine the output files of some workflows together, change how the analysis is done, and things like that. And the DA is compatible with our Horizon supervisory software; you'll see some examples of that, as we can do customized Horizon interfaces for the DA. The last one, which is a newer one that we're going to talk about in Example 2, is the DA touchscreen console, and it's a little bit different.
Unlike the rest of these, it's not simply a results visualization tool; it's actually used to initiate sample counts in triggered mode. We'll get to that when that example comes up. We show Apex Gamma and Genie out here because once you've FTP'd data off the DA, you can certainly import it into Apex Gamma and execute data review on it. And the same with Genie: you can take output of the Data Analyst, bring it into Genie, and do data review or reanalysis or whatever you need to do. One other thing to point out is that Genie is required for hardware adjustment of your network hardware, meaning things like turning a high voltage on or off or adjusting gains would be done through Genie. Basically, on the Data Analyst you go into the Dashboard and pause the acquisition, and that should allow you to get to the device via Genie. I think that covers that fairly well.

So, just to reiterate the simplicity of the system: once I've got a detector and MCA set up with a Data Analyst, it's connected, and my system subject matter expert or system manager has set up all of the workflows properly, you basically turn it on and it starts working, and it keeps working, analyzing, and generating data until you either pause it or turn it off. There's no operator intervention needed to make it function.

OK, one moment, I've got a question here. "Is the DA compatible with other manufacturers' MCAs, such as," it looks like, "the GBS MCA-527?" Actually, the answer is no. The DA is only compatible with the MCAs I've listed here, and they're all Mirion MCAs. Let me take a quick look. I've got another question: "Does the DA have a built-in MCA or any other spectroscopy system?" The answer to that is no; you need to have one of the MCAs I've listed here. As far as analysis goes, though, the DA does have internal analysis capabilities.
It has the full Genie suite internal to the DA, and additionally, like the Genie 4.0 release, it has Python scripting capability for customization. OK, let's move on a little bit here.

So here's what the Data Analyst package comes with. You get either the Wi-Fi or non-Wi-Fi version of the Data Analyst, and you get VESA and DIN rail mounting to allow you to easily mount it wherever you need to. There's a GPIO USB accessory, and this is what allows you to receive the triggers we've been discussing; there are up to two trigger lines available on it, and you also have annunciator outputs for faults and for alarms, you know, high count rate, high nuclide activity, what have you. There is a MET sensor that collects pressure, temperature, and humidity information. There's a USB GPS that allows you to gather GPS positional information. And it comes with a portable UPS that you can charge up; this UPS will give you about two hours of runtime on the Data Analyst if you're powering it with just the UPS.

OK, so here's another view of essentially the same thing, just reiterating the information you're getting during your measurements: not only your spectroscopic information and your nuclide results, but in that data stream you can be pulling in your MET sensor data, your dose rate from your EcoGamma, and your positional information from your GPS, and you can be sending out alarm or fault information or bringing in trigger information to control triggered workflows.

OK, so we're going to take a look at some of the visualization tools I mentioned a few slides back. The primary tool is the Data Analyst Dashboard, and this is the primary screen of that Dashboard with the setup functionality exposed. What you have here is a live indication of the spectrum acquisition as it's coming in.
You've got a waterfall plot of the spectra as they come in, with each vertical strip representing one spectrum, and you've got a data-time plot available at the bottom here that provides data based on whatever you've clicked on up here while you're viewing. For example, if I click on this dose rate meter, I'll get that information; this one is dose rate, this one is count rate, so if I clicked on the count rate meter I'd get the count rate information here. Here are my nuclide results from the previous run of this long workflow, and if I click on one of these nuclides, I'll get the time-data plot for the activity of that particular nuclide. Up at the top rail we have the various workflows that are actually active; as you click on each of these, the screen changes entirely to show the output of that workflow.

Within this interface you can get to pretty much anything you need to set up the system. This is where you would go in and set up your workflows; under the analysis settings, you set up the analysis associated with those workflows. Here you set up the instrument. Here you conduct or import calibrations from the calibration functionality. You can manage your data, archiving and things like that, from Manage Data, and there are a number of maintenance utilities available through here. And you have alarm and fault indicators up here: the fault is the F, and then, I believe, these are rate, nuclide, transient, and background. When those light up, we know that we have some issue.

OK, let me take a minute over here to see what's off to the side. Oh, I'll have to get back to that; there's a very detailed question that's been asked here.
Leland Davis, I see your question; let me get back to you with an answer to that one. It's fairly detailed, and I don't think I could answer it without burning up a good chunk of my time. The power requirement for the DA: I believe it's 12 volts; it comes with a 12-volt DC adapter. Yes, 12 volts; I've got confirmation from one of my silent experts that it is 12 volts. OK, so I'm going to move forward here. By the way, if I don't get to your question because of time constraints, we'll definitely answer those questions as a follow-up, so have no fear there.

OK. The DA Prospector; again, we're still in the visualization tools. The DA Prospector is used for in-depth analysis of live or historical data. It provides powerful visualization, a bit more than is provided with the DA alone; you can start comparing nuclides and things like that, and again, you can use it as the FTP server to read data right off the DA as it comes in. I talked about this a little already: you can recombine output from workflows with the DA RE Examiner, summing individual CNF files. You can change the analysis and change the graphical viewing detail of the data. This is probably the most advanced tool we have for manipulating this data.

OK, so another excellent visualization tool we have is Horizon, and we have spectroscopy versions of Horizon that interface with that software package. Horizon is a supervisory package, and supervisory packages are used, from a radiological standpoint, for example, to take input from many different devices, perhaps out in the facility, and put them in a format that makes it easy to quickly determine if there is anything unusual and allows you to make decisions on that information.
They typically start with a higher-level view that will give you some indication that there's a problem or a trend somewhere, and you can drill down into, for example, a room or a system or a particular detector from there. The example I've got here comes from an implementation of the DA at a uranium site; I've got more detailed slides on these three views coming up. Basically, the supervisory system is set up to monitor live data. It brings records into its own database so that they're always available, and then it displays faults, warnings, alarms, et cetera. Take a quick look: in this case we have four tanks. You drill down into one of the tanks, and then you get this spectroscopy view of Horizon. It looks very similar to the DA Dashboard we were just looking at, but it's a little bit different; these are customizable. You see you've got results over here, your charts over here, the spectrum here, and a lot of different choices you can make, more so than on the Dashboard itself, since this is customized specifically to that application. And here, for example, is a report I might get by clicking the report button: a customized report that can be manipulated with these checkboxes here, just to give you a very general idea of the type of flexibility you can have.

OK, so now we'll step into some application examples. Let's take a look here. OK, I did have a question, but my questions are rolling off the screen. All right, we'll have to get back to that one. There was a question about sampling time and whether we can go less than a second; I don't see it on my screen anymore. Basically, for sampling time, we've tested down to 250 milliseconds, but it's dependent on the MCA and the number of channels. Without having the question in front of me, but remembering it, that's the answer I can provide. OK.
So the Data Analyst has been successfully implemented in many applications since its introduction by Mirion around 2017, and we're going to highlight three of these in the context we've already discussed. All right. The first one is a nuclear power plant in situ CZT system, and we actually sell this system as a CSM GR1 package. Before I move into the various pictures of this implementation, I want to point out that when you see this box, this is the early-generation Data Analyst, before it was a nice small box like the one we have now; you see it strapped to the tripod here.

OK, what's good about this system: assuming a CZT detector is appropriate for your needs, and quite often it will do the job, this is a quick and easy way to deploy a DA-based CSM platform. Very quickly, you can take one of these systems and set it up quasi-permanently, or you can move it from location to location, depending on your needs and the problem you're trying to solve. The kit comes with either a GR1 or our GR1+ CZT detector; that's a 1 cc detector, a small detector. It comes with a GR1 shield, which is a collimator for the detector, a ruggedized laptop, of course the Data Analyst, and a tripod, and, not shown in this picture, a rugged case for all of this equipment.

This configuration is one of the first that we were really able to test out in the field. We did some measurements in cooperation with EPRI at several power plants, and basically these systems were placed on letdown lines right at the beginning of an outage in each of these cases. Some of them were there for short periods of time; some, like this last one, were in place for up to two and a half years. We've got output similar to what's shown here, and the idea is to finally give you some idea of what type of data we can get doing this.
So here we've got the reactor starting to shut down, a final shutdown, and then a forced oxidation event, which is the key thing being looked at. In this case, the reactor power can actually be tracked via the fluorine-18 activity, and that's this one dropping down to the floor here; my eyes can't distinguish the color difference between these two, but that's what that is. And cobalt-58, in this case, is an important nuclide for dose considerations. So here we have the forced oxidation event; the cobalt-58 has jumped up, and then, monitoring with CSM and the DA, the plant is able to see exactly when the activity levels of the cobalt-58 drop back down to where they expect them to be. They can look at other nuclides of importance to see when they stabilize, and they're able to do all of that without sending someone in to grab a sample each time.

OK, so moving on to the next one, and talking a little about in situ measurements and about ISOCS: here we have an ISOCS cart, with an Aegis specifically, and the use of the DA touchscreen console that I mentioned a little earlier. The idea is that you've got an ISOCS cart with a battery-cooled ultra-high-vacuum cryostat that will last some number of hours depending on what kind of battery swapping you do. Then you've got this DA touchscreen console that can be taken into the field with the unit; the actual DA box, the little black box I've been showing you, is inside the unit, and it's operated through this touchscreen panel here. There's another view here. With this, we can predefine samples. So if you're going out in the field to make measurements on whatever item or object you choose, you can have predefined samples set up for that. By samples, we really mean workflows.
So you're setting workflows up in this device in this configuration. It can be operated by minimally qualified individuals; you don't really need to know how to do anything other than, of course, get the ISOCS detector into its correct position and then choose the correct workflow to initiate, and it will provide immediate reporting on the screen, so there are a few shots of the screen coming up here. The touchscreen can be used for other things besides in situ measurements; I'm not going to go through those right now for time's sake, but you'll have the slides and can have a look. The main point is that this touchscreen application is quite flexible for those cases where you just want to set up simple sample counting.

OK, the touchscreen interface looks like so. You've got your workflows set up over here, and this is the result screen from the previous measurement. To start a count, you simply click Start Count, and counting is initiated. If you're asked for any sample information, you click the keyboard and type it in; you can abort the count here. And here's a screen that shows some of the setup information. Now, this setup information would not be something that your minimally trained operator would need to deal with; your operator would simply start the workflow and then fill out any sample information being asked for. One other feature in this application is that one of the workflows is set up to measure a check source, sort of like a QC check source measurement, and that can be done at whatever periodicity is necessary for your facility.

Moving on from there, the last application we want to take a look at is this stack gas monitor HPGe system. This is a system we've had some success with.
Basically we've got a modified 747 shield with, I believe, a 17-liter Marinelli beaker, with appropriate plumbing to draw from an effluent stack. ISOCS is used for the calibration, and I think there are some actual empirical data points thrown in with that ISOCS set. It's designed to continuously measure nuclides in the effluent flow. There's a pre-filter that's used to remove particulates and iodine, which could be further analyzed if necessary. This system is based on a 30% reverse-electrode germanium detector, a GR30; I'm not sure what the resolution is on that, so I put XX there. The MCA on the four we've actually deployed was, I think, the Lynx; that was before the Lynx 2 came out, so of course it would be a Lynx 2 now. It's got an 8-decade dynamic range and is designed for high count rates, and the stack flow input is used to convert results to an effluent rate. These are the years in which we actually put these out, I believe at four different facilities: I know we have two deployed in the United States, one deployed in Belgium, and one deployed in Australia. I think we've got another shot of that here. So here's the back of it with the pump and the plumbing, and here is its control console; again, the DA is the heart of this system. And here we have some of the data output to give you an idea of what the results were. I've got an MDC chart over here; for time's sake, I'm not really going to get into that. But you see we're monitoring a series of xenons here, and if my reading is correct, what we have is the facility going on a holiday break right here. So it shut down; you see decay of, for example, Xenon-135 here, and these others with shorter half-lives. When we come back from holiday, we start the facility back up, our activities make this prompt leap up here, and we're back into operation. OK. This is the Horizon interface for that stack monitor.
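The conversion mentioned above, from a measured air concentration to an effluent release rate using the stack flow input, is a simple product. This is a minimal sketch with made-up numbers; the function name and values are illustrative, not taken from the deployed system.

```python
def release_rate(concentration_bq_per_m3, stack_flow_m3_per_s):
    """Effluent release rate (Bq/s) = measured air concentration (Bq/m3)
    multiplied by the total stack flow (m3/s)."""
    return concentration_bq_per_m3 * stack_flow_m3_per_s

# e.g. 50 Bq/m3 of Xe-135 in a stack flowing at 20 m3/s
rate = release_rate(50.0, 20.0)  # -> 1000.0 Bq/s
```

In practice the stack flow is a live input to the system, so each analysis result is converted to a release rate at the moment of measurement rather than assuming a fixed flow.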
Again, similar to the DA dashboard, but customized, with many more options. And here, if you look over at the meters where we had count rate and dose rate, these are stack flow and sample flow over here. OK. Let me take a look over here and make sure I'm not missing anything. I've got a question: is the GPIO box USB? Yes, the GPIO box is USB; the one that I showed you in the kit, actually all of those options, were USB. Is the dashboard served via a web server running on the DA? I don't believe that's the case. The dashboard is actually written in Adobe AIR and launched as an executable; I believe the Adobe Flash issue stopped the use of a web server from that location. I'm not really an expert in that, so if I didn't answer that question appropriately, ping me and let's get a better answer. Did I understand that Genie is required to configure the Osprey, and the DA cannot do that? The DA cannot go in and adjust things like high voltages, gains, etcetera; you do need Genie for that. That's not really an issue, because the detector is sitting on the network and you can pull Genie up from elsewhere at any given time and make those adjustments. If you're running a sodium iodide detector that's stabilized, that shouldn't be too often. You just go into the DA dashboard, put the instrument in pause acquisition, do whatever you need to do with Genie, get out of Genie, and then start up the DA again for that instrument. OK. Moving on: in this discussion we made a distinction between operator-driven processes and the continuous spectroscopic monitoring paradigm. What I would invite you to do is consider the processes you have now that are operator driven, and whether there are cases where you could actually save money, save time, and save dose by switching to a CSM-based platform.
I would invite you to contact us and let us help you with that if need be, or if you don't need us for that, that's fine. I've got a blurb here that sort of says what I'm saying now without reading it all out to you. So the question is: are there areas where the transition to autonomous CSM would make a difference in terms of efficiency, data quality, or cost effectiveness? This evaluation will empower you to make informed decisions that align with your goals and objectives, and Mirion is always happy to assist with your evaluation and provide recommendations. We've looked at three systems. Looks like we've got a poll, so I'll take a moment here. The poll is: do you see new use cases with this tool at your site? We'll let the answers stabilize for a second, and then I've got one more slide to show you before we go into Q&A. That gives me an opportunity to get another drink of water. OK. All right, poll responders, thank you very much. The last slide is this: it's just to point out that there are actually many other applications we've implemented using the DA and this CSM platform. I'm not going to go into each of these, but you'll have the slide and can have a look. If you've got questions about any of these applications, or about applications that you think would be new at your facility, by all means contact us and we can discuss that. All right, thank you very much. We can move into the Q&A, I believe. OK, someone's asked: can we download your slideshow to show people who couldn't make the webinar? Yes, I believe we're going to make the slideshow available, and the webinar recording is going to be available as well. So if you couldn't make the webinar, it's going to be posted to the website to view as many times as you choose, just like all of our other webinars.
Next question: how many devices can connect at a time? The answer is one detector per Data Analyst. So if you need multiple detectors, just get multiple Data Analysts. OK: what's the difference between the short, long, and dynamic workflows? In that slide, at the top of the workflow bar we looked at, one workflow was labeled short, one was labeled long, and one was labeled dynamic. That was controlled by whoever created those workflows. Short and long simply meant that one had a shorter count time than the other. I don't know if I can see it on that static slide, but if I were to find the system it was actually set up on, I'm sure the short one would be maybe 60 seconds and the long one maybe 5 minutes or an hour, something like that. Now, the dynamic one we didn't really get into, but there are two types of continuous workflows you can set up: static and dynamic. Static is a continuous workflow that will accumulate for a fixed live time and then process and move forward. The dynamic workflows are basically your count-to-MDA type workflows; if you have an Apex Gamma system, you'll know what I'm talking about. Basically you set that one up by saying: here's the minimum count time you need to accumulate for, and here's the maximum, so if things get crazy, don't accumulate longer than this before processing. But the idea is you want the accumulation to continue until the nuclides you've set up in your analysis library meet their MDA targets. So that's the dynamic workflow. Let's see. OK, I've got a question: is Horizon support built in now, or does that require special adaptations? Horizon support is built into the DA itself. I'm going to jump to a slide here.
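The count-to-MDA behavior described above can be sketched roughly as follows. This assumes the common background-dominated approximation where MDA falls off as 1/sqrt(t); the function and its parameters are illustrative, not the DA's actual algorithm.

```python
def dynamic_count_time(mda_target, mda_at_1s, t_min_s, t_max_s):
    """Count-to-MDA sketch: assume MDA(t) = mda_at_1s / sqrt(t)
    (background-dominated approximation), invert for the time needed
    to reach the target MDA, then clamp to the configured minimum and
    maximum count times, as the dynamic workflow does."""
    t_needed = (mda_at_1s / mda_target) ** 2
    return min(max(t_needed, t_min_s), t_max_s)

# Target MDA of 1 Bq, with MDA(1 s) = 30 Bq, inside a 60 s .. 3600 s window
t = dynamic_count_time(1.0, 30.0, 60.0, 3600.0)  # -> 900.0 s
```

The clamping captures both rules Greg states: accumulate at least the minimum time, and never longer than the maximum before processing, even if the MDA target hasn't been reached.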
So this is one of our first slides; you'll see this built-in OPC UA server for easy connection to SCADA systems. That's basically all about Horizon. Now, Horizon does not come with the DA, though we do have, I believe, some standard Horizon screens that are ready to go with the DA. But typically what we've seen is that customers who are switching to CSM and want Horizon applied will have those screens customized, and we can do those customizations. Let's see, I've got a question: is it mandatory to have it connected to a network? Absolutely not. But you do need to have the DA connected to the MCA and the detector. In the CZT cases I showed you in the first example, where they were looking at letdown lines, none of that was on the network. Basically you had a CZT plugged into the DA by USB, and the GR1s plugged into the DA by USB, and I think that's all that was required, other than power and the like. No network, or at least none required; I can't promise you 100% that they did not have some type of network configuration, but it's not required. Is the DA compatible with older systems like the DSA-2000? No, it's going to be one of those MCAs I listed in the presentation, specifically the Lynx 2, a Lynx if you've got a Lynx, the Osprey, and the GR1 or GR1+. The GR1 and GR1+ have their own little built-in MCA, as we pointed out. Let's see. I think I mentioned I was going to have to get back to that one, and I will. OK, here's the question I already answered about the data sample time, higher resolution, less than a second: I think I mentioned we got that down to, I believe, 250 milliseconds depending on the MCA, so you're kind of pushing it down there. OK, here's a question that's a little bit hard to answer generically, but I'll give it a shot.
The question is: for in situ measurements using the DA and Horizon, is background an issue, particularly radon, and how do we deal with it? There is background there. There's a capability to take background measurements and use them for background subtraction in the analysis, but it's going to be a case-by-case basis, and it's difficult to give a single answer for radon, because radon itself can fluctuate. I would only be worried about radon and its daughters if there are lines within that series interfering with something I'm actually looking for. OK, I've got a question about the UPS. I'm asked to discuss the UPS that I showed earlier in a slide; let me find that slide. Here it is. This is a portable UPS, a little device really not much bigger than the DA itself, actually I think overall smaller but thicker. You charge it up and it can run the DA, lacking any other power source, for about two hours. I don't think it gives you any kind of low-on-power indication. There's an OPC UA installation question that I'm going to defer and look up. I am certainly not an expert on that, and I don't want to speak out of turn, but we have ample experts in that area, so let's make sure I have that question in a proper amount of detail so we can get an answer. All right. And Leland Davis, I will get back to you; I need to sort through all that and make sure I don't say something silly in front of hundreds of people. I'll read the question for the benefit of others, and if somebody else also wants the answer, send me an e-mail; I don't have the answer yet. "If you're using an HPGe at a sample site, cryogenic temperatures would need to be maintained at the site. Also, how do you account for varying geometry found out in the field?" OK, maybe I can take a stab at some of this.
Let's see: "When I worked in Ohio, the NDA (non-destructive analysis) team were often way off in determining the assay of residues in pipes in the X-326 building; very different numbers were obtained when samples were brought into the lab or borescopy was performed." OK. So, cryogenic temperatures: that's probably the briefest answer I'm going to provide. We do have devices like the Aegis that can maintain those temperatures. If you're talking about something that would be a permanent or quasi-permanent in situ implementation like I was discussing, then I might want to go with something like a CP5 Plus, assuming the environment could handle that. Short of that, something more portable that you bring in, like an Aegis, might be more appropriate. Varying geometry: the way I account for varying geometry is ISOCS. There are techniques to adjust ISOCS models using the data within the measurement itself to verify whether you have things right or wrong. Let's look at that again: how do you account for varying geometry found in the field? That also leads me to think of the actual ISOCS model itself. If it's something like a pipe, the pipe's the pipe; you're probably not going to mess that geometry up. It's more the accumulation of materials within the pipe, where you may not know the specific geometry, and that's where I was alluding to the various techniques that use internal lines and line ratios and so on to optimize that ISOCS model. For that scenario I think ISOCS is quite useful, because you do have that flexibility. And you know what, even though I've answered that question, I will take another look at it, Mr. Davis, and type something up and get back to you to make sure I've fleshed that out.
Well, it does win today's prize for most complex question, I will give you that; lots of stuff going on there. All right, I don't have anything else, unless somebody has a last-minute question. So I thank everybody very much for your participation. If you've got any questions, please contact us; you can contact me if there was something in particular I mentioned that you'd like clarification on, or that we didn't quite cover, or if you think of something as an afterthought. That's pretty much it. I went over a little bit, my apologies, but hopefully it was good for everyone. Thanks a lot.