Hello and welcome to the webinar. My name is Merrill Mann and I'm a clinical medical physicist at Sun Nuclear, a Mirion Medical company. It is my pleasure to introduce my colleague, Dr. Mark Schaefer, one of our medical physicists and research associates. Dr. Schaefer received his doctorate in Medical Physics from the University of Oldenburg in 2011. After eight years of working as both a clinical medical physicist and lead researcher in the Medical Radiation Physics working group at Pius Hospital, Oldenburg, he moved to Klinikum Leer, Germany, in 2015 to work as a clinical medical physicist and radiation safety officer before joining Sun Nuclear in 2022.

Before we begin the presentation, please note that all attendees are muted. We encourage questions; you can enter those at any time in the questions box on the left side of the screen, and we will address them at the end of the presentation. A recording of today's presentation will be sent out after the broadcast.

Before I turn it over to Mark, I would like to ask a couple of poll questions of those of you attending. First: if you are doing stereotactic treatments, what are you currently using for patient-specific QA? The response options are a non-SRS-specific measurement array like an ArcCHECK or MapCHECK; film; the EPID panel; an SRS-specific measurement array like the SRS MapCHECK; or other. I'm going to give you about 30 seconds to start responding... we'll give you another 5 to 10 seconds... OK. It looks like a bunch of you are using an SRS-specific measurement array, 43.5%, and then others are using an ArcCHECK or MapCHECK or something similar, followed by EPID, film, and other.

Our last question is: are you currently performing single-isocenter, multiple-target treatments? I know they've become really popular in the last number of years in place of whole-brain cases, so we're wondering if you have moved into that realm of single-isocenter, multi-target treatments. Once again, I'll give you another 20 to 30 seconds to get those answers in... OK, about 10 more seconds and then we can look at the results. OK, it looks like a lot of people are doing these single-isocenter, multi-target cases, so I think you're going to be really excited about what Mark has to tell us today. With that, Mark, I'll turn it over to you.

Greetings, everybody. Thank you very much, Merrill, for the nice introduction. I'm happy to be giving this talk today and presenting our research work on a paper entitled "Systematic Evaluation of Spatial Resolution and Gamma Criteria for Quality Assurance with Detector Arrays in Stereotactic Radiosurgery." I must start off by acknowledging those who participated in this work. The project was part of two Master's theses performed in our research and outreach work at Sun Nuclear: the thesis of Ann-Katrin Stedem at Heinrich Heine University in Düsseldorf, where she did a lot of the signal-theory analysis of SRS fields, and the thesis of Mark Totti at the National University of Ireland, where he did a lot of the treatment planning, the plan modifications to introduce known errors into the plans, and the verification of the plans with the different commercial detectors.
The paper has been published since February in the Journal of Applied Clinical Medical Physics, and I would like to encourage everyone to read it and look at the message we tried to get across, which I'm going to go into during this talk.

Before I start, I must mention that patient-specific quality assurance requires detectors and quality metrics which are appropriate for the specific task at hand: assuring the quality of the plans before they are delivered to the patient. With the increased use of techniques involving high doses per fraction, it is necessary to detect any errors that could result in a potentially relevant deviation from the intended clinical treatment goals. As such, it's important to evaluate the QA devices we use in the clinic for other techniques such as IMRT or 3D-CRT and check whether it is plausible to use these devices for SRS QA.

Given this background, we investigated commercial detector systems including the SRS MapCHECK and the ArcCHECK, both from Sun Nuclear, and the electronic portal imaging device (EPID) using portal dose, and we sought to answer the following fundamental questions. First, what detector spacing is required for an accurate reconstruction of stereotactic radiotherapy dose profiles? Second, what gamma criteria and detection thresholds would catch clinically relevant MLC errors, given that this is one of the major goals of PSQA? Third, does an artificial doubling of detector density necessarily increase error detectability? The idea here is that, in order to cover your entire dose field completely, you can do a detector shift to acquire a high-density dose map; but does this increase the detectability of errors? And finally, we tried to make a connection between the signal-theory analysis of the first question, which determines the required detector spacing, and the detectability of MLC errors.

I should mention before I proceed that we also investigated a fourth device, the myQA SRS from IBA Dosimetry, but unfortunately we could not include the results of that detector system in the publication due to a couple of mishaps with the handling of the device, which I can go into later.

But to the first question: what detector spacing is required for accurate reconstruction of stereotactic radiotherapy dose profiles? The background for this question is that we need to fully cover our dose fields with our detector systems when performing QA; that means during the QA process we want to fully reconstruct our dose profiles. It is typical in SRS that high doses per fraction are delivered, and in addition the dose gradients at the edge of the fields are quite steep. That means if we do not properly catch a potential error, it could translate into a severe clinical effect for the patient. In order to answer this question on the required detector spacing, we investigated small static square fields, which are typical in stereotactic radiation plans, as well as two clinical SRS plans. Treatment planning was done in the Eclipse TPS for 6 MV flattening-filter-free photons, and we investigated two different plans. The first was a small PTV of total volume 3.2 cm³, labeled here Plan 3CC, comprised of two lesions and intended to be treated in one fraction of 20 Gy.
The second plan was a single lesion of volume 35 cm³, labeled Plan 35CC, intended for treatment in three fractions of 9 Gy per fraction. In both plans, two half arcs and two full arcs were planned, and these were non-coplanar deliveries. Here on the right side you can see the dose maps for both Plan 3CC and Plan 35CC, and below you can see the steep dose profiles obtained with the non-coplanar delivery and measured using the SRS MapCHECK, just to show what these dose profiles look like. As I mentioned, it is very important to check whether the dosimetry devices we intend to use in our quality assurance are capable of reproducing these dose maps.

A very elegant method for doing this is Fourier analysis. We obtained lateral as well as longitudinal scans through both radiation plans and performed the fast Fourier transform. On the next slide we have the fast Fourier transform of the dose profiles I showed previously, as well as of the small square fields, and you can see here the normalized amplitude spectra for these different profiles. The Fourier transform gives a list of the frequency components found within the scan profile, so to speak: steep dose gradients are represented by high-frequency components, while flat dose profiles are represented by low-frequency components. We then determine the maximum frequency component of the Fourier spectrum, known as the Nyquist frequency, which is half the sampling rate required to completely reconstruct the dose profile. For instance, a Nyquist frequency of 0.34 per millimeter implies a required sampling interval, that is, a detector spacing, of 1/(2 × 0.34 mm⁻¹) ≈ 1.47 mm. This gives us the minimum detector spacing you would need to be able to measure the dose gradient at the field border of the 5 × 5 mm² square field. On the right-hand side, for didactic purposes, I'm showing the field sizes against the detector spacings derived using the Fourier analysis, and also projecting the derived detector spacings for the scanned dose profiles of our SRS plans, just to give an impression of what kind of field sizes these correspond to.

Alternatively, to check what sampling rates would be required to fully reconstruct a dose profile, we took the dose profile from the TPS and downsampled it, comparing against a reference each time. If you downsample a given profile, you can find the threshold at which you undersample that dose profile. We created a high-resolution dose profile from our TPS profile calculated on a 1 × 1 mm grid; the reference dose profile had 0.1 mm resolution, and we iteratively reduced the sampling of the test profile in 1 mm steps, each time performing a gamma analysis using the 1%/1 mm gamma index. So each time the TPS dose profile was downsampled, a gamma analysis was performed against the high-resolution dose profile, and you can see here the limits where the gamma passing rate falls beneath the tolerance level of 95%.
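To make both of these analyses concrete, here is a minimal sketch in Python of the two ideas on this slide, using a simple 1D sigmoid penumbra as a stand-in for a measured SRS profile. The profile shape, the 1% amplitude cutoff, and the simple global 1D gamma implementation are illustrative assumptions, not the actual scripts used in the study.

```python
import numpy as np

# Illustrative stand-in profile: a 5 x 5 mm field with a steep sigmoid
# penumbra, sampled on a fine 0.1 mm grid (an assumption for this sketch;
# the study used measured and TPS-calculated profiles).
dx = 0.1                                    # grid spacing in mm
x = np.arange(-20.0, 20.0, dx)              # off-axis position in mm
dose = 1.0 / (1.0 + np.exp(-(2.5 - np.abs(x)) / 0.3))

# (a) Nyquist analysis: find the highest relevant frequency component
# of the profile and convert it to a maximum allowed detector spacing.
spectrum = np.abs(np.fft.rfft(dose))
spectrum /= spectrum.max()                  # normalized amplitude spectrum
freqs = np.fft.rfftfreq(len(dose), d=dx)    # frequency axis in 1/mm
f_max = freqs[spectrum > 0.01].max()        # 1% amplitude cutoff (assumed)
print(f"required detector spacing <= {1.0 / (2.0 * f_max):.2f} mm")

# (b) Downsampling test: coarsen the profile, interpolate it back onto
# the fine grid (as an array's software would), and score it against the
# reference with a simple global 1%/1 mm gamma.
def gamma_pass_rate(x_ref, d_ref, d_test, dd=0.01, dta=1.0):
    """Percentage of test points with gamma <= 1 (global dose normalization)."""
    passed = 0
    for xt, dt in zip(x_ref, d_test):
        g2 = ((x_ref - xt) / dta) ** 2 + ((d_ref - dt) / (dd * d_ref.max())) ** 2
        passed += g2.min() <= 1.0
    return 100.0 * passed / len(d_test)

for spacing in (1.0, 2.0, 3.0, 4.0):        # candidate detector spacings in mm
    step = int(round(spacing / dx))
    d_coarse = np.interp(x, x[::step], dose[::step])
    print(f"{spacing:.0f} mm sampling: {gamma_pass_rate(x, dose, d_coarse):.1f} %")
```

With a penumbra this steep, the coarser samplings drop well below the 95% tolerance level, which is the behavior shown for the real data in the plot on this slide.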
The 95% tolerance level was used here following the recommendations of TG-218; also shown is the 90% action limit, below which you need to troubleshoot your QA. Also shown in this plot are data points corresponding to the detector spacings of the detectors investigated in this work, the SRS MapCHECK and the ArcCHECK. In the standard configuration, the SRS MapCHECK has a detector spacing of 2.47 mm; the high-density configuration means measuring once again after shifting the entire detector by half the spacing of the detector elements. You can see from these plots that downsampling the dose profile leads to a fall in the gamma passing rate, and that is the scenario you would have with a lower-resolution detector system such as the ArcCHECK. This gives a didactic view of the usability of these different detectors for the verification of these dose profiles, and as you can see, even the high-density version of the ArcCHECK would not be recommended for the verification of SRS plans.

So to the next question: what gamma criteria and detection thresholds would catch clinically relevant MLC errors? This was done by introducing MLC errors artificially in Python: exporting the plans from the TPS, introducing the MLC errors by closing the entire leaf bank by 0.5, 1, or 2 mm, and then reimporting into the TPS and recalculating the plans in order to show the clinical relevance of these errors, which is very important since that is the outcome we are trying to investigate. We also introduced a second structure to show the dose fall-off outside the PTV: a 2 mm margin surrounding the PTV, which we labeled PTV 2mm. This also shows the dose you could find at the neighboring organs at risk, which is crucial in stereotactic radiosurgery deliveries.

In order to assess the dose fall-off and the effects of these MLC changes, it is important to visualize the dose-volume histograms. In Figure 2 you can see the dose-volume histograms for Plan 3CC on the left-hand side and Plan 35CC on the right, showing the effects of the 2 mm change on the PTV as well as on the surrounding structure PTV 2mm. For a more quantitative look, it is important first of all to define what a clinically relevant error is. We reference here the definitions from ICRU Report 83, where a clinically relevant error is one in which the mean dose deviates by more than 3.5%, and from AAPM TG-100, where a dose change of 5 to 10% is considered clinically relevant. We also considered the effects of underdosage within the PTV as well as the increase of the dose to the surrounding structure PTV 2mm. What we found is that the effects of these MLC changes are more severe for smaller PTVs, which is of course known. As you can see here, the tables list the different quality metrics for the PTV, the 98% coverage, the minimum dose to the PTV, the mean and the maximum dose, as well as the mean and the maximum dose to the surrounding structure.
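As an aside on the mechanics of that error injection: the talk only states that it was done in Python on the exported plans, so the following is a minimal sketch of one way to do it with pydicom. The "MLCX" device label, the bank ordering, and the file names are assumptions about a Varian-style RT Plan, not the study's actual script.

```python
# Minimal sketch: close a whole MLC bank in an exported RT Plan.
# Assumptions: Varian-style "MLCX" device; LeafJawPositions stores one
# bank in the first half and the opposing bank in the second half;
# file names are hypothetical.
import pydicom

def close_leaf_bank(plan_path, out_path, shift_mm):
    ds = pydicom.dcmread(plan_path)
    for beam in ds.BeamSequence:
        for cp in beam.ControlPointSequence:
            for dev in getattr(cp, "BeamLimitingDevicePositionSequence", []):
                if dev.RTBeamLimitingDeviceType == "MLCX":
                    pos = [float(p) for p in dev.LeafJawPositions]
                    n = len(pos) // 2
                    # move the second bank toward the first by shift_mm,
                    # never driving a leaf past its opposing partner
                    for i in range(n, 2 * n):
                        pos[i] = max(pos[i] - shift_mm, pos[i - n])
                    dev.LeafJawPositions = pos
    ds.save_as(out_path)

for shift in (0.5, 1.0, 2.0):  # the error magnitudes used in the study, in mm
    close_leaf_bank("plan_3cc.dcm", f"plan_3cc_bank_closed_{shift}mm.dcm", shift)
```

Each modified plan is then reimported into the TPS and recalculated, which is what makes the dosimetric consequences visible in the DVHs and tables discussed here.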
Coming back to the tables: as you can see, for Plan 3CC, just the 1 mm MLC shift already leads to a 13.3% underdosage of the PTV, so we can consider the 1 mm MLC change as clinically relevant for Plan 3CC. For Plan 35CC we have a significant change in these metrics for the 2 mm MLC shift, with an underdosage of about 25% in the PTV. The mean dose of the PTV was not affected significantly, but in both cases we see a significant increase of the maximum dose to the surrounding structures.

I mentioned earlier that both plans were planned with two half arcs and two full arcs, meaning that we could now compress the entire dataset and try to find the gamma metrics which could catch these errors. In order to do this, we first represented the data using box plots, which show the median as well as the 20th and 80th percentiles of the data. We also used different combinations of dose difference and distance to agreement, as shown here for both Plan 3CC and Plan 35CC and for all the investigated detector systems. As you can see from this plot, on the top left for the SRS MapCHECK, the 1%/1 mm metric already falls below our 90% action limit, meaning that even without an MLC error we already have a very low gamma passing rate, which is an indication that this metric is too strict to be used for PSQA. What we are actually looking for is a metric which does not fall below the 95% tolerance limit when there is no error, but does fall below that limit once an error is introduced.

Using this concept, we decided to arrange the data in a more conducive and sensible way, so that we can easily derive the metric combinations which detect the errors for both plans. The idea was to continue using the median gamma passing rate and to summarize the results in these colored maps, shown here for the example of the SRS MapCHECK, with Plan 3CC on the left and Plan 35CC on the right. A suitable gamma metric is one that maximizes the change in passing rate as we go from the condition of zero MLC error to the condition of the clinically relevant 2 mm MLC error. Using this notation we can define a sensitivity metric: the sensitivity to the 2 mm error, ΔPR(2 mm), is the change across the table for the metric which detects the introduced error. With this notation, we realized that the 2%/1 mm metric already detects the 0.5 mm MLC error in both Plan 3CC and Plan 35CC, and taking the difference between the gamma passing rate at zero error and at the 2 mm MLC error gives a sensitivity of 46.2% for Plan 3CC and 41.6% for Plan 35CC.

We did this as well for the other detector types, and we found that the sensitivity for the metric which detected the errors was much lower for the ArcCHECK, and lower still for portal dose using the EPID. I must mention that one of the underlying reasons why this arrangement was reasonable for deriving the metrics from these tables is that for high-resolution detector systems it is advisable to keep the distance to agreement fixed and vary the dose difference, because for high-resolution detectors the dose difference plays an even greater role than a change in the distance to agreement.
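To state that selection rule compactly, here is a small sketch. The passing-rate numbers are hypothetical, chosen only to mirror the pattern in the gray maps; the rule itself, stay above 95% at zero error and maximize ΔPR(2 mm), is the one just described.

```python
# Hypothetical passing rates, arranged as in the gray maps:
# (dose difference %, DTA mm) -> {MLC bank shift in mm: median passing rate %}.
TOLERANCE = 95.0

table = {
    (1, 1): {0.0: 88.0, 2.0: 40.0},   # too strict: fails even without an error
    (2, 1): {0.0: 97.0, 2.0: 50.8},   # passes at zero error, fails with it
    (3, 2): {0.0: 99.9, 2.0: 96.0},   # too lax: the 2 mm error still "passes"
}

def suitable_metrics(table, tolerance=TOLERANCE):
    """Keep criteria that pass an error-free plan but fail the 2 mm error,
    ranked by the sensitivity metric dPR(2 mm) = PR(0) - PR(2 mm)."""
    candidates = {m: round(r[0.0] - r[2.0], 1) for m, r in table.items()
                  if r[0.0] >= tolerance and r[2.0] < tolerance}
    return sorted(candidates.items(), key=lambda kv: -kv[1])

print(suitable_metrics(table))  # -> [((2, 1), 46.2)] with these made-up numbers
```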
Coming back to the arrangement of the tables: using this idea, if you rearrange the table for a low-resolution detector, fixing the dose difference and varying the distance to agreement, you can easily visualize the same trends we see clearly for the high-resolution detectors. But for convenience, and to keep everything uniform, we decided to keep the same configuration for all the detector types. This was an interesting insight we found during this study.

So we have seen that the SRS MapCHECK system is quite sensitive to the introduced 2 mm MLC error. But this begs the question: does an artificial doubling of the detector density necessarily increase the error detectability? Increasing the detector density was performed, as I mentioned, by doing the measurements once more, the second time shifting the entire device by half the detector spacing, so that we now have double the amount of data. What happens is that this does not change the gamma metric which detects the 0.5 mm error; rather, the passing rates for each scenario become much higher, as you can see. We can visualize this even better if we include another shade of gray at a 75% passing rate. In this case you can see that by doubling the density of the measurement points, the sensitivity metric dropped to 36%. So this answers the question: doubling the detector density leads to increased gamma passing rates as well as a drop in the sensitivity to the introduced MLC errors, from 46.2% to 36%. And if you keep doubling the detector density, eventually you would need even stricter gamma metrics to be able to detect the introduced MLC errors.

Now to the final question. We attempted to make a link between our signal-theory-derived detector spacings for SRS plans and the detectability of MLC errors for array dosimeters: how does the Nyquist theory relate to the detection of MLC errors? Luckily, we found a publication from 2023 by James et al., who investigated lagging-leaf errors as well as stuck-leaf errors using various commercially available detector array systems. We were happy to see that they also investigated the myQA SRS, and that they used different combinations of dose difference and distance to agreement, including the 2%/1 mm metric which was most appropriate for the SRS MapCHECK as determined in this work. So we went on to use their results, as they also published the change in the gamma passing rates from zero MLC error to the 2 mm MLC error, exactly as we did.

This plot now summarizes the work, showing the detector resolution of the different devices investigated in our study. Our data points are shown by the filled symbols and the data points from James et al. by the open symbols. We are plotting the detector resolutions of the detector systems, as well as our derived detector spacing using the Nyquist theory, against the change in the gamma passing rate, which is our sensitivity metric for the introduced MLC errors; in this case, the sensitivity to the introduced 2 mm MLC errors for the 2%/1 mm gamma metric. It is interesting to see that the SRS MapCHECK, for instance, falls within the boundaries of the desired resolution to be able to verify SRS plans. I must also note that for the open symbols from James et al., the upper symbols are for the stuck MLCs.
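Before returning to that comparison, here is the arithmetic behind the dilution effect just described, in the simplest possible form; the point counts are hypothetical.

```python
# The gamma passing rate is just (passing points) / (all points), so adding
# shifted measurement points that land mostly in flat, error-free regions
# dilutes the failing points near the MLC error. All counts are hypothetical.
def passing_rate(n_pass, n_total):
    return 100.0 * n_pass / n_total

single_density = passing_rate(70, 100)   # 30 points fail near the error
merged = passing_rate(70 + 95, 200)      # 95 of the 100 new points pass
print(single_density, merged)            # 70.0 -> 82.5
```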
Returning to the comparison: the stuck-MLC scenario is analogous to the MLC bank errors we investigated in our study. As you can see, for the ArcCHECK there is, within the uncertainty limits, quite a similarity between the change in the gamma passing rates found by James et al. and the results we found in our work, which gives us confidence that we can use their other data points to infer the sensitivity we would expect from the detectors we were not able to investigate ourselves. The sensitivity of the EPID which we derived here was about 5%, and interestingly this lies around the published values for the myQA SRS from the work of James et al. This gives us some more intuition about the use of such high-resolution detector systems for catching introduced errors: by increasing the number of sampling points, we could potentially mask clinically relevant errors artificially. This is a very important message from this work. Also interesting is that they published results for EBT-XD radiochromic film, and the upper data point, the one we can compare against our results, shows an even lower sensitivity to the introduced 2 mm error. This gives us some more insight into the use of gamma metrics. So the question is: is the gamma metric always the best choice of quality metric when checking our plans, or could you use just the dose difference for such high-resolution detectors? That still remains open.

Now, coming to the summary of all the results from this work. The signal-theory, or Nyquist, analysis reveals that a detector spacing between 1.5 and 4.5 mm is required for verifying SRS deliveries. Using the median values in our gray maps, we found that the 2%/1 mm metric was most sensitive to the MLC errors for the SRS MapCHECK; we used the 95% tolerance level and global gamma with a 10% threshold. We also showed that the detector sensitivity to the 2 mm MLC errors was highest for the SRS MapCHECK, with a value of 46.2%, compared to the other detector systems we investigated, the ArcCHECK with 12.2% and portal dose with just 7%. We also found that doubling the detector density leads to an increased gamma passing rate due to more data points in error-free locations, an artificially increased passing rate in regions where no MLC error is found. This reduces the sensitivity to the MLC errors by about 10 percentage points for the SRS MapCHECK, and if you keep increasing the density, or the resolution, of your detector system, you have to adjust to stricter gamma metrics in order to catch the same MLC errors. Finally, we attempted a correlation between the change in the gamma passing rates, our sensitivity metric, and the detector resolution, and there are indications of a lower sensitivity to MLC errors using the EBT-XD film and the myQA SRS compared to the SRS MapCHECK when using the gamma metric. We must note that the sensitivity to MLC errors depends on the underlying detection mechanisms of the dosimetry systems, and there may also be a dependence on the delivery technique as well as the treatment unit.
I must also mention that the results we are showing here are specific to the units we investigated, but they could be generalized to some extent, since we included the results from James et al. as well. There is definitely a need for further research to complement these results, and we would like to encourage anyone who is interested in pursuing these investigations, either repeating the measurements with the same detectors we used, or including other detectors such as the myQA SRS, film, or other high-resolution detectors, or investigating what effects errors in technical components of the linac, such as the collimator, the gantry, or the treatment couch, would have, since these could also lead to potentially relevant clinical effects. So you could extend this work there, and if you are interested in partnering with us on these investigations, you are very welcome to reach out to us. Thank you very much.

That's great. Thank you, Doctor. There have been a couple of questions. As a reminder, if you would like to ask Mark a question, please put it in the Q&A section on the left-hand side, and we can start with a couple of the questions that have been asked so far. The first question: you looked at two different SRS deliveries; are there any plans to expand the work to look at other types of errors?

Very good question. There is really a large cohort of error types which could be investigated. As I mentioned, you could also introduce slight gantry errors, and this would be quite interesting if you do single-isocenter, multi-target treatments. It would be very interesting to investigate the effects of slight tabletop misalignments or slight collimator misalignments, because these have very serious effects on lesions which are not located at the isocenter. We do not currently intend to extend this work in that direction ourselves, but we would be interested if we have a clinical partner who wants to investigate this. Since we have the experience from this first study, we can guide you on exactly what to investigate, and using the methods we showed in this work I think you can really successfully perform these further investigations. It would definitely be very interesting.

Thank you. One more question: can you go into more detail about the IBA array? You made a comment that you tried to use it but were unable to in your study. Can you go into some of the issues or concerns that you saw with that newer array?

Definitely. It all starts with the underlying detection components of the myQA SRS. It is a CMOS-based detector, the kind of detection system typically found in cameras. Since we were investigating different variants of our SRS plans, for instance the 3CC plan intended for 20 Gy per fraction, you can imagine that measuring four variants of that plan means 80 Gy. After delivering 80 Gy onto the detector, a ghost image was imprinted on the detector, and we needed to normalize out this background image in order to perform subsequent measurements. In such a measurement you need a large number of data points, of gamma passing points, to be able to draw the kind of conclusions we made in this study, so we intended to repeat the measurements and acquire separate measurement points for individual arcs.
But we ran into a subsequent issue with the applicability of the angular correction factors, which are unfortunately provided in tabular form. You have separate tables for the angular correction factors, and together with our cooperation partner, who came over with the laptop and installed software to perform the measurements with us, we all faced the problem of reliably repeating the measurements using those provided tables of angular correction factors. So these were a few nuances which did not permit us to successfully collect data we could compress and summarize in this work. As you can imagine, we performed non-coplanar deliveries, and that makes things difficult: for non-coplanar deliveries you would have to attach the device to a gantry mount in order to let it rotate with the gantry. These were the nuances we faced which did not permit us to include those results; in the end we could not acquire all the data we needed for these high dose-per-fraction plans.

Thank you for sharing your experience with that. The next question is: do you feel the size of the SRS MapCHECK array is sufficient for stereotactic measurements, especially single-isocenter multi-target cases, where the targets can extend a fair range into the brain?

Yes, I think the size of the SRS MapCHECK, with its 7.7 × 7.7 cm detection area, is sufficient to cover the majority of sites in the brain. There is also the possibility to use the positioning tool in the software to apply shifts to other locations where lesions could be found, and in this way perform the measurements, the entire patient-specific QA. So it sufficiently fits the size of the brain, and with the help of the shifts you can apply, you can cover the multiple targets.

Great, thank you for that answer. I personally have a question for you. Looking at the data from the poll we took at the beginning of the presentation, it showed that a lot of people are still using a non-stereotactic array for their QA. Having been in the clinic and fought for funds to be able to purchase something like that, I wonder if you can offer any guidance. I noticed that one of your last slides compared the detector sensitivity of the SRS MapCHECK to the ArcCHECK and portal dosimetry. Do you have any advice for somebody trying to get a stereotactic array, in terms of the tools they do have at hand? What tolerances were impactful using that ArcCHECK? Or, given the choice between ArcCHECK and portal dosimetry for stereotactic measurements, what would you do as a clinician if those were the tools you had in your clinic?

Thank you. A very interesting and challenging question, because going into a treatment technique such as SRS without the required equipment is, I think, not the right step. During the planning process and the decision-making to implement advanced techniques such as stereotactic radiosurgery, with high dose per fraction and a steep dose fall-off outside the treatment area into regions where you could have sensitive organs at risk, especially in the brain, you definitely need a high-resolution detector to verify these doses and catch the kinds of errors we showed. And what I want to point out is that this work shows the introduced errors on a static setup.
You can imagine that during patient treatment, a slight movement would add to the clinical effect of the errors in quadrature; that means the error which a lagging MLC introduces would add to other errors which occur during the treatment itself. So if you do not catch those errors on your machine in the patient-specific QA, you may have a very severe issue during the treatment. And that is not the only issue; you also have gantry, collimator, and couch errors which could pose a problem. So if I had the ArcCHECK, I would definitely double-check measurements with a high-resolution detector system, maybe introduce film, because one thing we definitely know, as I showed for the first question on the required detector resolution, is that you must cover your entire dose map, and film already checks that box. If you have a good radiochromic film system and you know how to use it, then it would be advisable to use that in place of a lower-resolution detector; that is safe. But then you have the nuances of film dosimetry, the processing time, the handling, and the waiting time: you do not have your results immediately, and it is quite cumbersome to use. So a detector system with quick, immediate readout such as the SRS MapCHECK is a very ingenious solution. I also found it quite interesting that the resolution of the SRS MapCHECK falls just within the desired spatial resolution derived using the Nyquist frequency. So I would advise anyone who wants to introduce SRS treatment modalities to get a high-resolution detector such as the SRS MapCHECK. It is a really great product; to this day I ask myself whether it was intuition or ingenuity that the detector system was designed the way it was, because it really ticks all the boxes you need for a tool used for SRS QA.

Great, thanks so much, Mark, I appreciate that. That was a great answer. With that, I think that's all the questions we have time for today. I really appreciate your time, and I just want to say thank you to everybody who came to listen today. Once again, Dr. Schaefer's publication is out now if you would like to read more about his research. Thank you.

Thank you very much.