OK, so hello everybody and thanks very much for joining us for today's webinar, where we're going to be discussing how to accelerate radar research with an open architecture. My name is Jeremy and I'll be moderating the webinar today. My role here at NI is as a solutions marketer focused on radar, EW, communications and navigation applications within aerospace, defense and government. I'm joined today by my colleague Philip Henson, who focuses on the same applications as me and who works as an offering manager, also within our aerospace, defense and government business unit. Philip is the presenter of the webinar today, and he recorded the session and the demos already. Both of us are going to be available to answer any of your questions, both as the webinar is running and also live after the webinar finishes. Before we begin, we just wanted to cover a few housekeeping items. At the bottom of your screen there are a number of engagement tools that you can use, and you can feel free to move them around to get the most out of your desktop space. You'll find a copy of today's slide deck and additional materials in the resource list, and you can download any resources or links that you may find useful. For the best viewing experience, it's recommended that you use a wired Internet connection and close any unnecessary browser tabs or programs that are running in the background. For the best audio quality, please make sure your computer speakers or headset are turned on and the volume is up so that you can hear Philip when he's presenting. You can find answers to some of the more common technical questions in the help engagement tool at the bottom of your screen. The webinar will also be available on demand approximately one day after we're done, so we'll send you an email once it's ready, with a link to where you can access that on-demand version. And lastly, please do feel free to submit any questions that you've got during the session through the Q&A tool. We'll answer some of those as we go through, and then any other questions within the live Q&A at the end. So with that, I think we're ready to go. I do hope you enjoy the webinar, and don't hesitate to ask your questions.

Hi everybody, my name is Philip Henson. I'm an offering manager here at NI, and today I want to talk about how you can use the NI platform to accelerate your radar and EW research utilizing open architectures. As we look at today's electromagnetic battlefield, you quickly see this space is becoming increasingly contested and congested, not only from threat and threat-jamming emitters, which are intentionally trying to disrupt communications or radar or EW activity, but also from friendly and allied forces that are operating in adjacent and nearby frequency ranges. What this means is that not only does your system have to handle this congested environment, it also needs to be ready to handle the next environment as new systems come online. This is a challenge that is added on top of just making your radar or EW system work as intended, and to mitigate some of these challenges we're having to add additional functionality into those radar and EW systems, whether it's electronically scanned arrays, which require element-level control and distributed DSP, or it's adding communication and EW functionality onto an existing radar platform so that we can minimize our footprint on the battlefield. Additionally, to be aware of and make adjustments as our environment changes, cognitive and adaptive techniques are being required to be implemented within these radar systems.
What that requires is heterogeneous compute architectures, which means not only are you using FPGAs or ASICs for radar processing, you're now also having to add GPU elements as well. Finally, we also want to incorporate electronic protect and potentially electronic attack capabilities, ultimately causing us to have a much more complex but capable radar or EW system. Now, to meet those design challenges, we're moving towards digital engineering techniques, and what that really means is that we're starting in purely modeling and simulation environments, whether it be in C or MATLAB or Python, and emulating not just the radar but the environment as well, then quickly moving those to a laboratory testbed where you can see how those algorithms work with real-world signals, before moving into integration labs with subsystems of those components, and then finally to flight hardware. Now, if you caught my colleague Hayden's webinar about a quarter ago, you saw us talk about our radar validation offerings here on the right side of this chart. Today I want to talk about the upper left-hand corner of this chart, and about how our newest offering helps you get from the modeling and simulation world into the lab testbed as quickly and easily as possible. Now, people have been using NI hardware to support radar and EW research for quite a while, whether it be on our integrated low-SWaP USRP platform, which supports open-source tool flows and an integrated RF and digital baseband back end, or on our high-performance PXI chassis, which supports instrument-grade hardware and provides additional flexibility to add and remove components, whether it's FPGAs or RF resources, as needed. Today, though, I want to talk specifically about our new offering on the USRP side of the house.
NI's open architecture for radar and EW is comprised of a hardware infrastructure supported by an open-source software tool flow, and finally supported with documentation that gives you expected performance as well as all the required information to get up and started as quickly as possible, really giving you access to this capability as quickly as possible. Now, to start with the hardware: this offering is built up on the two USRPs I have beside me, an N320 and an N321. This system is scalable from just what you see here, which supports four transmit and receive channels, all the way up to 32 channels using 16 of these devices. They each support two transmit and two receive channels, can be centered anywhere from 3 MHz to 6 GHz, and per device support 200 MHz of instantaneous bandwidth. Reference clock and PPS signals are aligned using our OctoClock from NI, and then, finally, built using a third-party server, in our case a Supermicro server, we're able to stream all of that data back to four Intel X710 NICs, providing 16 10-Gigabit links to get all of that data from those USRPs. Now, the hardware is something we've provided for quite a while, as well as our UHD, the USRP hardware driver, which allows you to control and access each of those USRPs. But combining those with the native Linux drivers, we've now been able to provide customers an open-source reference software that gives you C++ libraries plus out-of-the-box working examples, so that you can have a fully phase-coherent multichannel system as soon as you plug this hardware together. And like I said, we've scaled that everywhere from 32 channels down to four channels, really depending on what your research needs are. What that enables you to do is deliver on this goal of moving from modeling and simulation quickly into the laboratory testbed by leveraging open-source tools and open-source software.
We don't dictate which software environment you need to work in. Rather, we provide you an infrastructure that allows you to integrate it into your own software environment, ultimately allowing you to move from simulation to testbed to tactical hardware much quicker than before, and ultimately getting new capabilities to our soldiers faster. Now, specifically, as you look into what you can do on this hardware: because we have USRPs with FPGAs inline, that means per element you can now do things like beamforming or receive pulse compression at the hardware level in real time, and then stream all of that data back to a server, where you have expansion capabilities to add additional functionality, such as GPUs to enable you to do AI or machine learning techniques with all of the data that you just received from the USRPs. Now, what I'm going to show you quickly is the actual system that we did all of our testing on. You can see there are 16 USRPs providing 32 transmit and receive channels, with all of that data streaming to our Supermicro server that you can see here. Now, we built this system internally, (a) to develop the hardware on, but (b) because we wanted to be able to tell you what type of performance you could potentially expect on this system. And so, while we started out with a goal of one degree of phase difference between channels, what we actually measured was less than a tenth of a degree of phase difference between channels, not only within a single USRP but between channels across multiple devices. Now, synchronization is half of the value of this architecture. The other half is streaming all of that data back to a server for you to do your own work on, and in this architecture we've been able to show, between 8 and 32 channels, 122 down to 50 megasamples per second, which translates to roughly 100 MHz down to about 40 MHz of instantaneous bandwidth.
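Since the value here is streaming every channel back to the server, it is worth sanity-checking those numbers with a short sketch. This is a back-of-the-envelope estimate only: the 4-bytes-per-complex-sample figure (16-bit I and Q, UHD's "sc16"-style format) is my assumption, and the system's actual over-the-wire format may differ.

```python
# Back-of-the-envelope check on the streaming rates quoted above.
# Assumes 4 bytes per complex sample (16-bit I + 16-bit Q); the real
# system's wire format is an assumption here, not a documented fact.
BYTES_PER_SAMPLE = 4

def aggregate_gbps(channels, samples_per_second):
    """Aggregate streaming rate across all channels, in gigabits/second."""
    return channels * samples_per_second * BYTES_PER_SAMPLE * 8 / 1e9

# The two measured operating points mentioned above:
rate_8ch = aggregate_gbps(8, 122e6)    # ~31.2 Gb/s
rate_32ch = aggregate_gbps(32, 50e6)   # ~51.2 Gb/s
print(rate_8ch, rate_32ch)             # both fit within 16 x 10 GbE links
```

Under that assumption, even the 32-channel case implies roughly 51 Gb/s aggregate, comfortably below the 160 Gb/s of raw link capacity, which is consistent with the statement that the current bottleneck is the server-side software path rather than the Ethernet links.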
While we're very pleased with the current performance of the system, we fully expect that as we implement the latest version of DPDK in the base UHD drivers, we'll actually see those streaming rates increase significantly. Now, what I'd like to do is show you two demos of this system. I split this up intentionally so I can highlight, one, the ease of tool flow between MATLAB and our reference architecture, and two, the phase-coherent operation of the system. So you'll see a range-Doppler radar as well as an angle-of-arrival demonstration. What you have here is a machine that I've remoted into in one of our labs, and the reason I've done this is because for this demo we're utilizing our NI Vector Signal Transceiver to operate as a radar target generator. That's this screen you see here on the upper right, and that's the system that's going to take these generated pulses from our USRPs and do the Doppler frequency shifting, copying and delaying of those signals so that we can create targets in space. We also have a server running MATLAB, which is going to call this reference architecture, utilizing the exact same hardware that I have here on the left, to generate a stream of pulses and then receive those, passing all of that data into MATLAB to use MATLAB's own radar processing toolbox. This is really to show that this can be done with any algorithms that you're already working on, that you already have. This is a fairly simple demo, but I think it proves the point of what you're able to do. Now, to demonstrate this, I'm going to start by changing the range on the first target to 1000 meters and 100 meters per second. Then the second target I'll move to 2000 meters and 200 meters per second. Then we'll clean up any previous runs on this system.
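Before looking at the results, it helps to see what that MATLAB processing stage is doing. Here is a minimal, self-contained NumPy stand-in for pulse-Doppler processing: range-compress each received pulse with a matched filter, then FFT across pulses to resolve Doppler. All radar parameters (waveform, sample rate, PRF) are illustrative assumptions of mine, not the demo system's actual configuration.

```python
import numpy as np

# Minimal pulse-Doppler sketch of the kind of processing the demo hands
# to MATLAB's radar toolbox. All parameters are illustrative assumptions.
c, fs, fc, prf = 3e8, 100e6, 3e9, 10e3
n_pulses, pulse_len, n_range, bw = 64, 128, 1024, 50e6

t = np.arange(pulse_len) / fs
chirp = np.exp(1j * np.pi * (bw * fs / pulse_len) * t**2)  # LFM pulse

# One simulated point target: 1000 m range, 100 m/s closing velocity.
delay = int(round(2 * 1000.0 / c * fs))   # round-trip delay in samples
fd = 2 * 100.0 * fc / c                   # Doppler shift in Hz

echoes = np.zeros((n_pulses, n_range), dtype=complex)
for p in range(n_pulses):
    # Delayed echo with a constant Doppler phase step from pulse to pulse.
    echoes[p, delay:delay + pulse_len] = chirp * np.exp(2j * np.pi * fd * p / prf)

# Range compression via a frequency-domain matched filter (circular),
# then a Doppler FFT down each range bin.
H = np.conj(np.fft.fft(chirp, n_range))
compressed = np.fft.ifft(np.fft.fft(echoes, axis=1) * H, axis=1)
rd_map = np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)

d_idx, r_idx = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
est_range = r_idx * c / (2 * fs)
est_vel = np.fft.fftshift(np.fft.fftfreq(n_pulses, 1 / prf))[d_idx] * c / (2 * fc)
print(est_range, est_vel)  # peak lands near 1000 m and 100 m/s
```

The peak of the range-Doppler map recovers the simulated 1000 m, 100 m/s target to within a range bin and a Doppler bin, which is the same idea the demo delegates to MATLAB after the reference architecture has captured the real signals.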
We'll generate a new record, and when I hit this, what I'm doing is calling the open architecture for radar and EW reference architecture directly, as we provide those examples. We haven't made any changes to the code or any changes under the hood, so this is exactly what you would be able to do as soon as you have your system up and running as well. Now, if I read those results into MATLAB, you can see here on the upper left that I do find a target at 1000 meters and 100 meters per second, and a second target at 2000 meters and 200 meters per second. Now I'm going to run that demonstration again with a couple of different target values. First, we'll take the first target and move it to, let's do 4000 meters at 250 meters per second. And then we'll move the second target back to 1500 meters and down to 150 meters per second. Then clean up the last run, generate a new record, and read that into MATLAB. Now you'll see those two targets move in space: we now have a target at 4000 meters and 250 meters per second, and a target at 1500 meters and about 150 meters per second. This highlights how quickly you're able to use existing MATLAB code or MATLAB processing algorithms and immediately start using our reference architecture to generate real-world signals to use against those algorithms. Now we're going to switch to our angle-of-arrival demonstration. For this, I'm going to use these two USRPs to my left, an N320 and an N321, that are connected to a single transmit antenna and four receive antennas set in a linear array. What I'm going to do is transmit a continuous wave from the single antenna and then receive across each independent channel. Then, using the MATLAB Phased Array Toolbox, I'll calculate the phase difference between each element, which I can then use to estimate the pointing angle of that antenna.
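The angle-of-arrival estimate described here can be sketched the same way. This hedged NumPy stand-in (the carrier, element spacing, and arrival angle are illustrative assumptions, and it is analogous to, not a copy of, what MATLAB's phased-array toolbox does) simulates one snapshot across a four-element half-wavelength linear array and inverts the adjacent-element phase difference to recover the angle:

```python
import numpy as np

# Phase-interferometry angle-of-arrival sketch for a four-element uniform
# linear array. Carrier, spacing, and angle are illustrative assumptions.
fc = 3e9
lam = 3e8 / fc
d = lam / 2                        # half-wavelength element spacing
true_angle = np.deg2rad(30.0)      # arrival angle from broadside

n = np.arange(4)                   # receive element indices
# Per-element phase of an ideal plane wave arriving at true_angle:
phases = 2 * np.pi * d * n * np.sin(true_angle) / lam
x = np.exp(1j * phases)            # one coherent snapshot across channels

# Average the adjacent-element phase difference, then invert the array
# geometry to recover the pointing angle.
dphi = np.angle(np.sum(x[1:] * np.conj(x[:-1])))
est = np.rad2deg(np.arcsin(dphi * lam / (2 * np.pi * d)))
print(est)  # ~30.0 degrees
```

Note this only works because the channels are phase coherent: any unknown channel-to-channel phase offset would bias dphi directly, which is why the out-of-the-box coherence of the testbed matters for this demo.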
We'll start with the system here, and I'll go to what I believe will probably be about 70 degrees. I'll generate a new record, then read that into MATLAB, and you can see an estimated pointing angle of right at 63 degrees. Now, if I move that to about straight on, so I expect somewhere between five and negative five degrees, I'll generate a new record again, read that into MATLAB, and you can see we're right at three degrees. And this concludes our angle-of-arrival demonstration of the open architecture for radar and EW research. I hope from this demonstration you were able to see two things: (a) the ease of moving your simulation into real-world signals using this architecture, and (b) the phase-coherent transmit and receive capability of this testbed that's available out of the box. Now, if you want to know more about NI's general radar prototyping or validation capabilities, please visit ni.com and look at our NI radar solutions page. If you want to know more specifically about this open architecture for radar and EW that I've been talking about, you can find the solution brief on ni.com, which walks through the capabilities and how we expect customers to utilize this reference architecture in their workflows. If you want details about the reference architecture, please find the user manual either in the KBs or in the GitHub repository where this reference architecture lives; there you can find the specifics of how our software is put together, the specific hardware that we use, including cabling and setup architectures, as well as the documented performance that we were able to measure on this system. Finally, if you have additional questions I haven't answered, please feel free to reach out to NI via Contact Us, and we'd be happy to answer those questions or talk to you more about any of these solutions you've seen today.
Now, if you have specific questions that we can answer today, I ask you to put those questions in the chat pod and we'll be happy to take your questions now.

All right, hello again everyone, and thanks very much for your attention so far. I hope that you enjoyed seeing both the presentation and the demonstration of the new open architecture for radar and EW research. As I mentioned before, we have Philip on the line right now. So hi, Philip. Hey, how's it going? Hey everybody. All right, and I see that there are a few questions that have come in already, but please do continue to post questions as we go through this Q&A session. So, Philip, there was one that I answered and pushed out in the chat just as the webinar was running, and the question was: can I use this software with other USRPs? Is there anything you'd like to add onto what I pushed out to the attendees? I think you covered it pretty well. You know, we built this around the N320 and N321 specifically. It's using the base UHD drivers, though, so portions of the data streaming application would be applicable and you could use them as examples, but the setup and configuration scripts won't be optimized for different USRPs. Though if you have a really specific use case and you think it would be really valuable to expand this architecture to those USRPs, that's a conversation I'd be very happy to engage with and to consider. Excellent, thanks Philip. So the next question: are there distinct differences between a four- and 32-channel system, and can I go beyond 32 channels? I'd love to take that question. So there's not a distinct difference in the software; we intentionally built it in a very modular fashion so that you can expand the channel count as you need to.
So the key difference will be in the streaming performance that you get right now, and that is not limited by the USRPs but really by the throughput available on the system. Right now it's limited by the server that we're using and the throughput capability of that server. That means you can scale this architecture differently if you need higher streaming rates, and then as we make updates to UHD, I expect that to improve even more. So there's not really a significant difference there. On the second part of that question, on scalability beyond 32 channels: we built this, like I said, intentionally to do 32 channels, and we tested it on 32 channels. Then our lead user that we were providing this to, after we got everything working, immediately asked us if we could support 64 channels; that was the first question. So we've done a little bit of analysis, and we're pretty confident in the supportability of 64 channels for this architecture. The N320 itself natively could expand to 128 by 128 channels, and that should be supportable in the architecture. We just, like I said, have not developed the configuration or setup scripts to do that automatically, and what those scripts are primarily doing is making sure that IP addresses and communication paths are set up appropriately, defining who's going to be the master LO device and who's going to be receiving that LO. Some of that type of functionality would have to be changed, but the architecture itself should support it really well. And it's probably valuable to point out, I don't know if I made this point well enough in the webinar, but we're providing this as a starting point for customers to use, so I want to make sure it's clear that this is all open code, so that you can then take it and start implementing your own algorithms, whether they be in software or on the FPGA using RFNoC in the USRPs themselves. So it's fully expandable and configurable to your own use cases.

Yep, and just one thing to add on the 128-channel system: if you're interested to look at how to actually go about configuring the N320 and N321 and the LO sharing in up to 128 channels, there is a KB on the ettus.com site that already shows how to do that in general with the N320 and N321. So I'll see if I can add a link to that and push it out in the chat here as well. Moving on to the next question, this one is about DPDK: what does DPDK do, and how do you know it will increase throughput? That's a great question. So DPDK is the Data Plane Development Kit. I can't remember whether it's an Intel product or it just works with Intel NICs, but you can think of it almost like some other direct memory access paradigms you may be familiar with: what it allows you to do is bypass the kernel-level drivers in the Linux system and write directly to user-space memory from the NIC card. So what it should do, and what we fully expect it to do for us, is basically allow us to scale the data writing, or data streaming, aspects of this reference architecture in a way that is no longer constrained by just the throughput of the CPU itself or the throughput of the kernel-level, Linux-side OS. So even at a 32-channel system, I expect that to get us to at least 100 MHz of IBW that we could support, if not more. And I'm hopeful, at latest sometime next quarter, if not this quarter, to have that within UHD. I probably should also add:
DPDK is supported in UHD today. However, it's not supported on the latest version of DPDK that is required for the NIC cards we are using, because they are later-generation Intel NICs. So that really is the delta there: it's not that DPDK isn't supported today, it's just that the NIC cards we're using to get the Ethernet port density that we wanted for this system are not supported by the currently integrated version of DPDK, and that will be upgraded, like I said, sometime by next quarter, if not earlier. Excellent, thanks Philip, and thank you for the question. So moving on to the next one: how much does the system cost? So I can start with that, and then Philip, feel free to add anything. Basically, from an overall software, examples and documentation perspective, they're all freely available and open source. You can go onto the GitHub link and download them right away, so there's no cost for those aspects of the reference architecture. To give a rough idea for the hardware components: for the USRPs, the OctoClock clock distribution modules and cabling, the NI parts of it, if we're talking U.S. dollars, it's approximately $8,500 per channel, and there's some scale depending on whether you're going for a smaller or a larger channel system; this is purely from the NI list price. I know we have some attendees here from Europe and from other parts of the world; in euros that's a little lower, about €7,500, just to give a rough estimate. And then on top of that there are obviously the third-party components, like the Supermicro server at about $9,000 and the NIC cards at about $500 each, just to give you a very rough idea. And we do have a bill of materials on the Ettus, sorry, on the GitHub, which tells you everything that you need to put the system together. So, Philip, anything to add there? I think from that perspective you covered it pretty well; I don't think there's anything else I'd want to add. Cool, excellent. So the next question is: can I program this in LabVIEW? That one is a quick one: we did not support this in LabVIEW, and I don't see us porting this over to LabVIEW in the short term. It is primarily an open-source and UHD-based solution. Excellent. So then moving along, the next question is: can this be used for prototyping communications links? That one is, I think, a really interesting question, specifically because right now, the way that we do the transmit side of this is we're actually using, on the USRP within RFNoC, which is the FPGA framework built on the USRP, a function called the replay block, where basically you're replaying a segment out of memory. There's, I can't remember off the top of my head, but I think a gigabyte or two gigabytes of memory on the USRP, and that's what you can play back repeatedly and coherently to the receive channel. What that means, though, is it's not perfect for every communication application, because it's not the dynamic transmit stream that you may want for comms applications. But that is something that we're wanting to upgrade. So the way to answer that question, I think, is that today there are aspects of comms research you can do: things like beam weighting or some types of MIMO transmit and receive algorithms you can prove out with the system as-is today, and then as we release upgrades to this and upgrade to true transmit streaming, you'll be able to do that.
Right now it's the receive side that streams all the way straight through the USRP and back to the server. We will have transmit streaming soon; again, I think on similar timeframes as DPDK, so I'm hoping we'll have that in the next couple of quarters or so. If that's important to you, we'd actually like to talk to any customers that really want to use this in the comms space right now, because I want to make sure there's nothing we're missing in those customer workflows that we might want to update in the process. But it's primarily targeted for EW and radar applications, where it's either receive streaming or, like in a radar, where you're doing pulse transmissions and you're not trying to dynamically change that transmit payload. Excellent. So we've got one more question in the Q&A, and if anyone else has questions, now would be a good time to add them in to make sure that we get to them before we close out the webinar. So let's get to this last one and see if any more come in in the meantime. The final one here is: are you providing radar-specific DSP blocks? So that's one thing that we're not doing. We're really building this as a platform for customers to do research in this space and utilize their own algorithms that they're already trying to validate and test. Because this is built on UHD, I know there are GNU Radio radar functions out there, but this is built in C++, so there's not a GNU Radio block that you can drop in for this reference architecture right now. It's not necessarily something that I'm planning on, but there are a lot of IP blocks available within that GNU Radio radar infrastructure that you can utilize.
And like we showed in the demo briefly, the real intent is that you can directly take things like MATLAB's radar toolbox and utilize them within the reference architecture, using the data that you're generating from the reference architecture in that processing environment. So we're not really trying to dictate or provide the radar pieces of this; we just want to provide you the platform that allows you to do the work you're already doing, faster. Excellent, thank you Philip for the answer. I haven't seen any other questions come in in the meantime, so this is the very final call for questions; get them in very quickly if there's anything else that you'd like to ask. And Philip, any other closing thoughts or anything else you wanted to add prior to finishing off? I think the only other thing to add is that, now that we've just released this, as customers start to use it, I'm just as interested in understanding your user experience. So if you're interested and want to talk about some of the ins and outs of how it works in your tool flow and your workflow, we're happy to support those conversations. You can reach out either to a direct seller or via email to us, or a variety of different ways, whatever way you want to get in touch with us; we're happy to engage and support that conversation. And as this is a continuing and evolving platform, we actively take that feedback and fold it back into the next generation of releases of this. Excellent. I think, then, that no other questions appear to have come in, so the final thing really to say is just thanks very much everybody for joining us today. And if you do want to get in touch with us individually or offline, then please do reach out.
There's a link in the engagement widget where you can get contact information and get in touch. We'd also love it if you could take a few moments to fill in the survey listed at the bottom of the screen in one of the engagement tools; that feedback will really help us as we continue to plan webinars, and we want to make sure that the content we're presenting is relevant and useful. So once again, thank you very much to everyone for attending. Thanks Philip, for presenting and for joining me for the Q&A today. I hope you enjoyed the webinar, and hopefully we'll see you at the next one. Have a great day ahead, and see you soon. Thanks, all.

As threats and countermeasures evolve, researchers and systems engineers are challenged with bringing new radar capabilities from concept to the lab to fielded systems. A major obstacle on this journey is the time it takes to migrate IP from simulation to firmware, and to build up boards and infrastructure to form a testbed for assessing the real-world performance of novel algorithms, waveforms, and components.  

Join NI for an introduction to a new architecture built on software defined radio, which provides an advanced starting point for radar prototyping, and abstracts the complexity of synchronization and data movement for multichannel systems. In this webinar, we’ll show how to incorporate IP from modeling and simulation tools into a hardware-based radar prototype.