Well, good morning everyone, and thank you for logging into this webcast. My name is Graham Greene, and in this webcast we're going to be talking about test standardization, and specifically the stages you go through to implement a standardization initiative within your organization. We're going to talk a little bit about best practices, and we're going to focus on a couple of case studies: mainly one from Philips, in the medical industry, and one from Jabra, GN Audio, in more of a consumer electronics industry. The industry is less relevant than the processes and the tools those teams used as they went through it.

The format today: this was actually a webcast that we ran live a couple of months ago, and we are back by popular demand. We got a lot of feedback afterwards about its relevance, and there was some really good conversation. So today we're doing something of a hybrid: I'm live here today, I'll be joined at the end by one of my colleagues, and the real value is that we're going to dive into some of your questions while we replay some of the content from that webcast. Throughout it, I'd love to be engaging with you in the chat. Down here we have the group chat, where we can all talk to each other, but we also have the Q&A feature, the little Q&A box within this environment. That's where I'd love to get your questions, so we can dive deeper into some of the topics we talk about. I'd love to hear your ideas around how you might implement this, or what further challenges there are. We did one of these within the European region this morning, and we had some really great conversation around the silos built up between different departments and how you bring them together, around connectivity of test systems, and around the structure of your team and how you can drive proficiency within that team. So, lots of ideas for questions as you go through, and then I'll come back online at the end and we'll try to answer some of those questions in a format like this. With that, I'll say thank you for logging in and for spending the time with us today. Let's start by looking at the content. Please do enter questions throughout; I'll write replies in text during the content and I'll speak to them afterwards. Thank you very much.

Hi, I'm Graham. I'm on the solutions team here at NI, working on production test. Today we're going to talk about software standardization. In general, we're living in a challenging and exciting time to be test engineers. The devices we're testing, whatever industry you may be in, are becoming more complex as their functionality and quality expectations rise, and the environment within which we're working, in the form of operations, is becoming more demanding. These two trends put a lot of pressure on test engineering groups, building up into challenges from throughput to sustaining to quality coverage; you name it, I'm sure you're feeling the pressure right now. If we take these pressures and put them into this wheel diagram, we can see that an effective test strategy is not about prioritizing just a single one of these to meet a single problem. An effective test strategy balances all of them, to prioritize resources in the right place.
We can also flip this idea of pressure around and say that these aren't just pressures being felt by the test team, such as the development schedule; they are values we deliver back to the business, such as the ability to take our product to market in a short period of time because of our development schedule. Now, as we look at standardization initiatives, we can see benefits across all of these. The ability to reuse code eases our ability to meet regulatory compliance and shortens our development schedule. It also makes sustaining easier, because people are more familiar with the code. If people are using a single tool set, they're more comfortable developing code for a larger area of functionality, so we're able to test more and raise our test coverage. If we're more proficient in a tool set, we're likely to develop better quality code, and we can put more resources and more scrutiny into that code, meaning that the quality, the stability, and the reliability of our test stations go up. Of course, standardization is not just about lines of code; it can also be about data types and formats, being able to standardize the way we publish data from test stations and other assets into the cloud, into a data lake, into a database, or wherever it may be going. So we start seeing that standardization initiatives, if we can invest in them, can offer long-term benefits across all of the different values of test that we, as test managers, are asked to provide.

Well, if standardization initiatives are so perfect and add so much value, why are we not all doing them all the time? In fact, why did we not do them ten years ago? The reason is that they're hard. They're hard to justify, because they need upfront investment from leadership in the form of both resources and funds. They're hard to implement from a technical engineering perspective; building platforms is not always easy. It's hard to drive adoption and change management across an organization. And then it's hard to measure the success, to calculate the ROI that ensures future initiatives are funded, along with the maintenance of the platforms you've built. For this reason there are engineers all over the world who are stuck in a loop of perpetual siloed development at individual test stations, and it's hard to get out of that loop and build a more holistic test strategy. One of the reasons it's hard is that, when I look at these challenges, they're not all engineering challenges. They are as much business challenges as they are technical challenges, and that reaches to the root of who we at NI are and how we can help you bring both sides together. Here at NI we know the best-in-class test departments aren't just building test stations; you spend as much time championing the value your department can bring and meeting wider business goals. So we're here to support that: yes, to support you in your integration, instrumentation, and software, but also in the business goals and objectives you're looking to achieve.

Today we're going to take this concept of standardization and break it up into four constituent parts: the process, the architecture, the tools and technologies, and the people and their proficiency. Often, teams will be looking to standardize on one or more of these.
You don't have to do all four, but I think all four should be considered. The process is the methodology, right from your initial concept through your implementation, your deployment, and maintenance. The architecture is the framework within which your engineers can develop. The technology is the tools they're going to use to develop within it, writing those individual test steps or whatever it may be. And then the people and proficiency are really important, because the plan or strategy is only as good as the people who have to implement it. You can have the best architecture and process methodology laid out, but if your people aren't in place to execute it in an effective manner, then you're not going to get the results you're expecting. So we're going to use this slide somewhat as an agenda, and we're going to walk through these four stages, if you like, of software standardization.

Now, for each of these stages there is both a technical and a business requirement. We break this up here at NI into three steps. The first is your value assessment. Here we're trying to understand what the business need is, what the return on investment is, how we articulate and calculate the return on investment from the initiatives we're proposing, and how we build a case to the decision makers that we should act upon it. Once we have that model, we then need to understand what to do first: we need to plan our resources, set out and forecast our timelines, and build an execution plan which will allow us to implement this successfully. And then, of course, there's no value to the business until you've actually deployed or delivered it, so you have to execute according to the plan you've just made, implementing, developing, or integrating the systems you have promised. We're going to use this framework for each of the four stages. We probably won't cover every step totally evenly for each stage, but you should be familiar with these words of value assessment, execution planning, and value delivery as we go through, because we're going to use them several times to talk about what stage of implementation we're in.

So the first stage is the process. Here we're looking at the methodology for test. All modern software, as it gets more complex, will have a lifecycle that looks something like the diagram at the bottom of this slide. It's probably more familiar to you as a circular process rather than an individual line. By implementing modern development techniques and the more modern process methodologies that we'd all be familiar with from product development, and absolutely bringing these into the test development world, we can not only map out our resources more effectively, but also think beyond just the development of the station itself and create a more comprehensive plan. Now, the critical thing we see at this stage of process planning is really that value assessment. I've made a list here, from ROI and metrics, to process and skills management, to benchmarking against industry best practice, to laying out what metrics you're going to track. This is probably the key investment area that we see people stumble on in test departments across the world, and so this is probably where to spend time on your process.
To back up that statement, here's a quote from a senior test manager, and I'm pleased to say a friend of mine, who has worked for many decades at Philips in their ultrasound department. From his experience, the key is the ability to articulate the business value that his team is bringing to the organization, and to draw the connection between the complexity of the tasks that he and his team face and the amount of budget they are asking for to solve them. If you can articulate that effectively, that is the key to getting sponsorship from upper management for the initial investment that is needed for a proper standardization initiative, at whatever level you're looking at, from process through architecture, software, or whatever.

Once you have your methodologies and your process in place and agreed upon, the next stage is to look at your architecture. Different groups will prioritize and build their architecture in different ways. If you look at the aerospace industry, the goal of a lot of the test architectures there is to separate, for example, the choices of hardware available on the market today from the code modules they have running their tests. They know they have to maintain these stations for years, even decades, and they know that if they need to change out one component of their system, they do not want to have to recertify the rest of it. So there are a lot of abstraction layers: measurement abstraction, hardware abstraction. In other cases, the reason for having a standardized architecture is to have a core team that can govern the quality of the code going out of the door. In yet more cases, you keep your standard architecture as open and flexible as possible, because you know you'll have fast iterations of NPIs, or new products coming in, and you need to get them out the door quickly and meet any new test requirements that come along. Now, whatever your reason for building this framework for you to develop within, in general you'll still go through those same steps of assessing the value you can provide and building a business case to develop the architecture, executing your plan, and then delivering that value; some of those values are shown on the slide here.

Let's look at a couple of examples of how different teams have architected and prioritized differently for their software standardization. For this, I'd like you to become familiar with this block diagram of a test station. I use this diagram to represent the different elements, right from the DUT and the connectivity at the very bottom, through the infrastructure of that test station, your rack, connectivity, fixture, and maybe chambers, through to the instrumentation and sensors right in the middle. That is the core part that actually generates the data, which is the foundation of any test station. We then move into the software, connecting to the drivers, then looking at the measurement steps or functions themselves. Those have to be sequenced by some kind of test management system, and then the data from that sequence, the pass, the fail, and the raw data, has to be passed into some wider network so it can be referenced later. At that same level and at the top, in analytics and operations, we start seeing more system management, deployment management, and health management.
You'll see this standard diagram format change a little in the coming slides as I talk through different customers; each one has a slightly different setup, but in general it follows this format. Now, the team at GN Audio, you may be more familiar with them by some of their brand names such as Jabra, make personal audio equipment such as headphones. Their goal was to reduce the development time and schedule they had for bringing new products to market. Within this, they looked to standardize as much as they could on all of their hardware, and they standardized on NI data acquisition equipment with a custom rack and fixture, and then standardized where they could on their software. For this they obviously had custom measurement functions, many of them written in LabVIEW, but many others they could take off the shelf by looking for third-party IP, in this case from CIM A/S, a company based out of Denmark who are experts in audio test. By using those test management tools, and then again standardizing right at the top in their analytics, they were able to reduce their development time from months to weeks, because they looked at every part of their system, kept custom only what really needed to be custom, and made everything else either off the shelf or a standard component.

If we take this standardization a level deeper, here's an example from Philips. I say deeper, and what I mean by that is that they weren't looking at standardization for a single product line, or even a few product lines; they were building the infrastructure of a standard test station that could work across potentially many departments. Within this they have the same set of standard rack and instrumentation. Above that instrumentation they had a set of instrument drivers, allowing them to switch out different modular instruments depending on the different needs. The clever part was above that: they had a very complete test architecture with standardized test steps. So, whereas GN Audio were still writing custom test steps or importing them from the CATS software from CIM, here we have a library of measurements that are available, and the user is just writing XML configuration files which set up the sequence, maybe set limits, and call those test steps. Now, to make this work, on the right-hand side you can see they had to set up not only a standard software architecture but also start standardizing the layout of their organization, in the form of the roles and responsibilities on their team. They built a core team which was responsible for the core infrastructure and the core test steps. Then they had local teams that could write their own test steps if some unique functionality was needed, and they could donate those test steps, where they thought there was a more general-purpose use case, into that separate library managed by the core team. All of this is abstracted away from the user, who is just writing XML files; they get to see what functionality is made available to them, or request it from the local or core teams as needed. We'll come back to this case study at the end and look at the benefits they got, but I can tell you they were significant, in the form of development time, development cost, and resource management.
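To make that configuration-driven approach more concrete, here is a minimal sketch, in Python rather than LabVIEW, of a core-team-owned step library being driven by an XML sequence file. The step names, XML tags, and limit attributes are hypothetical illustrations of the pattern, not Philips' actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical library of standardized test steps owned by a core team.
# Each entry returns a measured value; real steps would call instrument drivers.
STEP_LIBRARY = {
    "measure_dc_voltage": lambda params: 5.02,   # stand-in measurement
    "measure_ripple_mv":  lambda params: 12.3,   # stand-in measurement
}

# A local team only authors configuration like this, never step code.
SEQUENCE_XML = """
<sequence name="power_board_fct">
  <step type="measure_dc_voltage" low="4.75" high="5.25"/>
  <step type="measure_ripple_mv"  low="0"    high="50"/>
</sequence>
"""

def run_sequence(xml_text: str) -> bool:
    """Execute each configured step and check the result against its limits."""
    root = ET.fromstring(xml_text)
    all_passed = True
    for step in root.findall("step"):
        value = STEP_LIBRARY[step.get("type")](step.attrib)
        passed = float(step.get("low")) <= value <= float(step.get("high"))
        all_passed = all_passed and passed
        print(f"{step.get('type')}: {value} -> {'PASS' if passed else 'FAIL'}")
    return all_passed

if __name__ == "__main__":
    print("Sequence result:", "PASS" if run_sequence(SEQUENCE_XML) else "FAIL")
```

The point of the pattern is that a new product only adds or edits configuration files; the measurement code itself stays under the core team's review and version control.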
So we've looked at different levels of architectural standardization. Of course, we could go a lot deeper into the individual HALs and MALs (hardware and measurement abstraction layers) and other software delivery mechanisms, but right now I'd like to keep this at a high level, because again, one of the challenges I see in architectural standardization is that teams will jump straight to value delivery and start coding, without necessarily scoping the value assessment well enough to get the correct level of sponsorship for the amount of development they need. In the Philips case, multiple teams wanted to buy into the project when they heard about it, and that allowed them to invest more into the infrastructure because they had the resources to do it. And then, in the execution plan, setting expectations and forecasting timelines is not an easy thing to do, but time spent here, in my experience, is well spent before jumping into the value delivery of building out your plugins, your architectural layers, your data infrastructure, your connectivity, whatever it might be.

When you do get to value delivery, one best practice that I have seen is to ask the question of what you want to keep in house and what you want to outsource. Outsourcing doesn't necessarily mean that you couldn't do it in house; the question is whether you should be doing it in house. There are certain elements of your project which can have specialist needs, and often this expertise can be brought in from outside companies or consultants that do this all the time. It can speed up your development, it can derisk your development, and it can allow you to load-balance more effectively against the other work you have to do. The picture here, by the way, is Dennis from CIM with Christian and Chava from GN Audio, who we just mentioned. We saw their standardized architecture, and they were very successful by working together. One of the highlights, talking to Christian about why he felt that project was successful, was his ability to draw on not only the senior engineers on his own team, in this case Chava, but also the engineers at CIM, who had a lot of industry experience and best practice they could bring, along with their tools.

Once you've standardized on your architecture, the next thing to look at is what tools to use. So if we return to our standardized diagram for a test station, this one is a little more functional than the previous two, which were custom examples from real test stations. In this generic example you can see so many different types of things that might need to be done, and for each thing there could be a different tool, or multiple tools. Imagine if each of these pieces of instrumentation hardware came from a different vendor, with a different driver set, a different set of communication commands, and different documentation. Now imagine if all of the different software blocks drawn here are written in a different language, or using a different tool, whichever was the off-the-shelf preference of the engineer who built it. Suddenly, down the road, we run into a significant maintenance and sustaining problem, especially if those first development engineers move on to different projects and are no longer available. If you need to update or change something in this architecture, it can be a real struggle, and I've seen this with test groups which haven't standardized on the tools that they use.
The challenge is that there are so many tools out there. Even just from NI, we have multiple tools which could be used to complete certain stages within this diagram. So, from a technical leadership perspective, the recommendation and best practice is to pick a standard tool set which you can then standardize on for all of your engineers, allowing each engineer to transfer their skills when working on different elements of this project, and also to pick up different projects, or older projects, quickly ramp up, understand how they work, and fix, change, or customize them as needed.

There are three main tool set decisions that you need to make. The first is the test development tool. Here we're looking at the algorithm development, the test step development, and maybe some embedded code within that. Then you've got your execution framework: this is going to be your test sequencing, maybe your parallelization configuration, maybe the interlocks you have between your test steps, and maybe your XML files or other areas of data publishing. And then you have your systems and data management software. This is going to handle deployment and the health management of those test stations, and it can also act as the go-between, taking your data and publishing it up into databases, MES systems, or data lakes in the cloud, wherever you need that data to go, to provide analytics for your operations team to optimize your processes.

Now, for each of these, and I am biased, I would suggest LabVIEW as the test development tool: optimized and designed specifically for test engineers, and an industry standard across the world. TestStand is our commercial off-the-shelf solution for the execution framework, again an industry standard adopted by tens of thousands of test stations across the world. It abstracts away building and maintaining your own sequencer, and brings a lot of specialist functionality into that engine that maybe you wouldn't have developed yourself. And then SystemLink is a newer addition to our platform over the last few years, allowing that connectivity to be brought into your test stations, so you can think of them not as individual systems but standardize your data infrastructure across your whole plant, factory, or even organization, and then provide the analytics to understand how things work. Now, for each of these there are multiple options: you could develop things in house, there are open source versions, and there are other options available on the market. But I would encourage you to look at these three tools, which I would consider best in class for these three functions, and you should have a plan for all three functions, test development, execution framework, and systems and data management, whether it comes from NI or not.
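As a brief aside before looking at what GN Audio chose, here is a minimal sketch of the job the execution-framework layer does, sequencing steps and fanning them out across DUT sockets. This is not TestStand, just the concept in plain Python; the step functions and socket count are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical measurement steps owned by the test development layer.
def power_on_test(socket: int) -> bool:
    return True  # stand-in for a real measurement plus limit check

def audio_response_test(socket: int) -> bool:
    return True  # stand-in

# The execution framework's responsibilities: step order and parallelization.
SEQUENCE = [power_on_test, audio_response_test]
NUM_SOCKETS = 4  # assumed fixture capacity

def run_socket(socket: int) -> bool:
    """Run the full sequence on one DUT socket, stopping at the first failure."""
    for step in SEQUENCE:
        if not step(socket):
            return False
    return True

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=NUM_SOCKETS) as pool:
        results = list(pool.map(run_socket, range(NUM_SOCKETS)))
    for socket, passed in enumerate(results):
        print(f"Socket {socket}: {'PASS' if passed else 'FAIL'}")
```

A commercial framework adds the pieces this sketch ignores, such as operator UI, report generation, and interlocks between steps, which is the argument for buying one rather than maintaining your own.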
If we come back to GN Audio and look at what they chose for these three areas, they had great success: you can see their test coverage was going up while their test development time went down, and they credit that to their tool chain. They use a lot of LabVIEW, along with the CATS software from CIM A/S, for their test development. The reason they use the CATS software is that audio test involves a lot of very specialist algorithms, and being able to take off-the-shelf IP for those measurements allowed them to trust the quality of their measurements and also speed up their development. They use TestStand, which integrates well with it, for their test management, along with LabVIEW for some peripheral tools around it. And in that case they then have an in-house system that they were using for systems and data management.

As I said earlier, the plan or strategy you have is only as good as the people you have to execute it. We know, through working with a lot of engineers for many decades here at NI, that engineering groups that adopt a strategy for educating and building the proficiency of their engineers are found to be 50% quicker in development, need 43% less time on maintenance because of the reliability and stability of their systems, and are 66% faster in the time it takes them to learn, compared with just on-the-job or mentor-led learning. If you can invest in the proficiency of your people, it is therefore not only good for the quality of your standardized system, your process, your architecture, and the tools you have, but you're also developing the careers of those individuals, building and developing them as people as well.

One of my favorite quotes around proficiency is from Chris Cilino, a great engineer and, I'm pleased to say, a friend of mine, who has worked as a test engineer at many companies in a few industries and now works as an independent consultant for test. His philosophy is that training an individual in how to use a tool such as LabVIEW is relatively easy: you send them on a course, they're smart, they pick up how to use it. The challenge, as he says here, is implementing a process for every engineer to adhere to the guidelines they learned, and supporting that initial success until it becomes a habit. This needs a structure for mentorship and code review, building not just individual proficiency but team proficiency.

If we look at what a team needs to be proficient, yes, they need the developer competency, as marked in the middle of this slide; this relates to the tools and technology we talked about. But they also need proficiency in software engineering tools; this is more relevant to the architectures and the processes or methodologies we've talked about in this presentation, and it allows engineers to work in a repeatable way. Then they need technical leadership, in the form of that strong architect-level engineer who can build out the architecture in the first place and coach all of the other developers on the team to use it effectively. And lastly, you need community learning. The capacity for your team to excel, if they never talk to anyone else, is bounded by the most proficient person on that team. If you are able to build a community where they can share ideas not only within their team but with other engineers in industry, then you unbound the potential of your team, allowing them to learn from, benefit from, and adopt industry best practice from beyond the confines of what they already know themselves.
Here at NI we use these four building blocks to measure team proficiency, and we can assess and coach on that, rather than just on individual proficiency (think of a training course or an exam). On the education services side, if you are looking to build up the proficiency of your team, do look into the training courses, entitlements, and certifications that we can use to support you with that. And while I'm talking about the ways NI can support you, it would be remiss of me not to mention some of our methodology consulting services. I said at the start that NI believes in helping you with your test strategy, not just in building a test station and doing integration work with hardware and software tools, though of course we do that as well, but in helping you as a test organization meet your business goals. And if that means coming and talking to you about your value assessment, interviewing the different stakeholders, and helping you get an external view on how your business could operate more effectively, we have a team in place bringing together hundreds of years of collective test engineering expertise to help you with just that. Because once you have that methodology in place, we then have the integration services we talked about, the outsourcing of some of that planning and development, and of course the hardware and software services at the tools level as well.

I'll finish with an idea of what a successful standardization initiative can bring to your business. Returning here to Neil Evans, who we've mentioned a couple of times: in this case it was building a standardized test system, software architecture, and process for his team at Philips. They were able to reduce their development effort by 80% for each NPI that came down the road, by abstracting a lot of the development work for each new project into this standardized framework and architecture. With that, they saw massive benefits in sustaining, because there was a single codebase that people were familiar with, and of course that codebase had far more scrutiny on it from engineers before it was deployed down onto the lines. They've also been more efficient along the way with the cycle time on the lines themselves. And their certification time reduced as well, because the documentation needed for that certification only had to be written once for large chunks of their code. I'd like to thank Neil for all his work and also for sharing this story with us. It's been good working with him over the years, and he's been an advocate of NI both on the relationship and people side and on the tools and technology side as well.

So what are your next steps to learn more about this subject? Well, the first thing I'd recommend, as we've covered a lot of content today, a lot of it pretty quickly and at a pretty high level, is to download the solution brochure for software standardization from our website. This goes into more detail on the software and services we've covered today. To dive further into those, also on our website, I would look at some of the content around LabVIEW, TestStand, and SystemLink, our methodology consulting services, and our education services; there's a lot of information there for finding which components would be tailored to your needs. There's a lot to read, of course.
We'd also be happy to help you and talk you through it, so do contact us today, either about the hardware or software products we've talked about, or for a consultation on whether our consulting services and methodology group would be the right place to start looking at some of these more business-level value assessments.

OK, well, thanks. And thank you, me, for summing some of those things up. So now we've got an opportunity to have some Q&A and actually engage a little bit more. I know with these types of discussions the real value is in being able to go a little bit deeper than the content does and be a little bit more specific to your own applications. So I'll invite in my colleague Caitlin, who I believe is here too. I hope you can see and hear me; tell me if you can't. I can see her. Can you hear me? I can hear you.

So the first question: we got a question from Edward from Creation Technologies, which was interesting; he posted it during the chat. Where should I standardize first, on process or on tools? My take on that, and the answer I gave, is that they kind of come hand in hand, but if I had to pick one, I think I would go process first. There's a nice example from a customer I've been visiting this week. When he first joined the team, they said, oh, we've got an issue with one of our test stations; you're the LabVIEW guy, it's written in LabVIEW, therefore you should be able to fix it. He sat down at this thing, and after a few hours he just gave up and started from scratch, because the code did not adhere to any process he had any understanding of. So there is a benefit when everyone is using the same tool, because at least you're more familiar and better able to understand it, but I think it's more important to define the development processes, standards, and architecture, and all align on the same one, even if you're using multiple tools within that architecture. The tools just make it easier to do.

What other questions have we got, Caitlin? I am trying to get to them. Kelly, can you maybe help? The attendee list is blocking the questions and I'm having trouble navigating. That's OK, I've got this one here. There's an interesting question from Greg: could you give brief details on SystemLink and how it can help? Is it a steep learning curve? Is it able to connect to NI TestStand? So, yes. SystemLink is pretty wide-ranging; it's that systems and data management layer, and there are different elements within it. The basics of SystemLink is that it is a tool for connecting your systems and your data. So, purely on the systems health management side, it's about understanding what systems I have, how they are operating, whether they are operational, whether I can find them all, all the instruments in them, and whether they're in cal, and deploying software or code revisions to your systems over the network. There's a lot of management functionality that comes with it, and there's no real learning curve for that; it's pretty self-explanatory. Then you move beyond just systems management into more of the data management.
You then get into, if you link it into TestStand, it will automatically look at the different metadata on your TestStand tests, pull that up, aggregate it into the cloud, and start building dashboards, so you can automatically start looking at things like yield or throughput dashboards across all of your systems, as long as they're running TestStand. It's got a client architecture, so you have something like the SystemLink client running on your test station to build those up. Now, of course, the place where I think there is a learning curve is that, to get real value, you want data from all of your test stations, not just the few that may be running TestStand, and maybe not just your functional test; maybe you need ICT, maybe you even want data from some of your assembly stations. So then you've got to work on your data: standardize your data infrastructure so it can all be aggregated in the same place, or do some more complex analytics to get trends out of it. So there's a bit of a learning curve there, I would say, and for the more complex implementations we generally provide SystemLink with some implementation services to get the thing running; we've seen it most successful that way. With that, you can either run a pilot or not, but that's been the most successful approach, just because the data and IT infrastructure side isn't necessarily a core competency of test engineers. That was a long answer; I hope I answered your question. Yes, do talk to us about SystemLink. It's super exciting; there's so much we can do with it beyond test. What else have we got?
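As a rough illustration of what standardizing the data coming off each station can mean, here is a minimal sketch that pushes one uniform result record to a central server over HTTP. The endpoint and field names are assumptions for illustration; this is not the SystemLink API, only the general pattern of every station reporting the same schema.

```python
import json
from datetime import datetime, timezone
from urllib.request import Request, urlopen

# Hypothetical endpoint on a central data server; replace with your real service.
RESULTS_ENDPOINT = "http://dataserver.example.local/api/test-results"

def publish_result(station_id: str, serial_number: str, test_name: str,
                   value: float, low: float, high: float) -> None:
    """Send one test result using the same fields from every station."""
    record = {
        "stationId": station_id,
        "serialNumber": serial_number,
        "testName": test_name,
        "value": value,
        "lowLimit": low,
        "highLimit": high,
        "passed": low <= value <= high,
        "timestampUtc": datetime.now(timezone.utc).isoformat(),
    }
    req = Request(RESULTS_ENDPOINT, data=json.dumps(record).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    urlopen(req, timeout=5)  # a real adapter would add retries and buffering

if __name__ == "__main__":
    try:
        publish_result("FCT-07", "SN123456", "dc_voltage", 5.02, 4.75, 5.25)
    except OSError as err:
        print("Publish failed (placeholder endpoint):", err)
```

Once functional test, ICT, and assembly stations all emit records like this, yield and throughput dashboards can be built without caring which tool produced the data.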
Hey Graham, there's a question that I think would be interesting for us to touch on, from Alane: do we have any anecdotes or advice on overcoming leadership's resistance to integrating outside elements into production, such as networked services like SystemLink, versus building their own tools in house, where they may be seen as more trustworthy because the IP is all within the company? Well, we do get this a lot. The two things I would say are, firstly, it's all about goal setting. There's a lot of cool technology out there, and the biggest place I see it fail is when people pitch technology rather than pitch outcomes. Getting traction for a new piece of technology is seen as a risk, but getting investment to drive a certain business goal or outcome is seen more as an initiative or project. On the in-house versus outsourced thing: personally, it really comes down to build versus buy, which is somewhat cultural within your company. The difference being, I don't believe the core competency of a test engineer is IT, or databases, or network infrastructure. Test engineers generally don't have the skill set, or don't add as much value, doing that kind of work, and IT generally don't understand the test and manufacturing world well enough to implement it effectively. So I think take something off the shelf where you can get a high-level starting point, and, like I said before, there's usually some integration work which will need a mixture of input from your test engineering team, your sustaining team, or whoever, to understand how to implement it. But yes, start with goals in mind.

I'll add to that briefly, and I know it's a long answer. It's very tempting in these situations to start with a pilot. There's a phrase I've started hearing a lot recently, which is pilot purgatory: teams get stuck perpetually running pilots and never get beyond the pilot phase. There are two reasons I see for this. One, they don't necessarily start with the goal in mind; as I said, they start with a technology pilot, so they haven't got an outcome they're striving towards, and it's very hard to tell whether the pilot was successful, because you don't know what the goal of the pilot was. The second way I see them fail is when the goal of the pilot was set at a small enough scale that it didn't have a significant impact on the business, so even if it was successful and you did hit your goal, you still don't get traction with management, because they don't see that it made a big enough change. So if you're looking to drive adoption of a product like SystemLink, or systems management software in general, start with the end in mind and scale your pilot at a size that management will take note of. Those are probably my two best practices. OK, what else have we got?

Do you recommend making your test stand more automated and less operator-driven, or vice versa? In general, I'm pro automation. When you say test stand, I presume you mean the test station, because there are two types of automation: there's test automation, running through a series of tests, and then there's DUT handling automation. The first of them I would always automate as much as possible; it eliminates human error. And in general, every operator I've ever talked to is happiest with the test stations that automate more of the process, which allows them to focus their time on the non-automatable parts, usually more of the DUT handling, the data input, and things like that. So yes, in general I'm pro automation. The DUT handling I'd automate in most cases too, but that can be complex and expensive unless you get to really high volumes, so I understand that's not always the case.

Sounds good, we have another one. Do you have any recommendations for test-related software best practices at the architecture level, and/or coding standards and code review, in a general industry sense, not necessarily LabVIEW or NI focused? I mean, this could be a three-day course rather than a five-minute question, but yes, we do. I suppose the way I would answer this is that I talked a little in the webcast about individual proficiency versus team proficiency. On individual proficiency, you could talk about loosely coupled code, individual code modules that can be easily separated. I could talk about hardware abstraction and measurement abstraction: separating, or abstracting, the instrumentation you're using from the software code modules that run it. All of these are larger subjects, though, so for that one, get in touch with NI. We have a ton of content around this subject, we have a ton of experts around it, and we even have both individual and team proficiency programs where engineers can up-level their software architecture. It's a bigger question than I can answer now, but get in touch.
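Since hardware and measurement abstraction came up in that answer, here is a small sketch of the idea in Python. The instrument names and driver calls are hypothetical; the point is only that the measurement step depends on an interface, not on a specific vendor's driver.

```python
from abc import ABC, abstractmethod

class DmmInterface(ABC):
    """Abstract DMM so measurement code never references a vendor driver directly."""
    @abstractmethod
    def read_dc_voltage(self) -> float: ...

class SimulatedDmm(DmmInterface):
    """Stand-in used for development or when hardware is unavailable."""
    def read_dc_voltage(self) -> float:
        return 5.01

class VendorXDmm(DmmInterface):
    """Hypothetical wrapper around a real driver; the session handling is illustrative."""
    def __init__(self, resource_name: str):
        self.resource_name = resource_name
    def read_dc_voltage(self) -> float:
        # Open a driver session with self.resource_name and take a reading here.
        raise NotImplementedError("wrap your actual instrument driver")

def test_rail_voltage(dmm: DmmInterface, low: float = 4.75, high: float = 5.25) -> bool:
    """Measurement step written against the interface, so instruments can be swapped."""
    value = dmm.read_dc_voltage()
    return low <= value <= high

if __name__ == "__main__":
    print("PASS" if test_rail_voltage(SimulatedDmm()) else "FAIL")
```

Swapping or recertifying one instrument then only touches the class that wraps it, which is exactly the benefit the aerospace example earlier was after.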
OK, let's see. I think we have time for a few more. I work for a company that has a lot of people developing software using LabVIEW, but they're all very scattered across multiple groups. How can I go about convincing management that these developers should work together and collaborate more? OK, great question. I can see this in two ways. There's a variety of people using LabVIEW: how can you get them to work together? Hopefully that's an easy one, in the sense of why would they not want to work together at an engineer level. There are some ground-up things you can do, even if it's as simple as starting a user group where people can share pieces of code they're working on and the projects they're working on, and start gaining a bit of traction from the ground up. I've never seen a manager who will deny time for engineers to come together and share best practices on the engineering work they're doing. It can be hard to find the people, but that's what I would do from the ground up. Top down, the challenge here, and this is a challenge not only for standardizing on the same tool but also for standardizing on the same processes, is that every group of engineers has their way of doing it and they think it's the best way of doing it. It's their little kingdom, right? They put walls up around their kingdom, and every kingdom within the organization is looking to take over the whole empire; they want everyone to do it the way they're doing it. So you get these battles: should we standardize on my process, or your process, or their process? The thing that's hard to understand, or that sometimes people don't understand, is that all the processes are good. The challenge is that each team might have different goals or metrics they're striving towards, and so from their own perspective, through their own lens, their process is the best, because they may be targeted on a different set of metrics than your team is. To align this, the best practice is to go upwards in the organization, and you've got to find a way of elevating, or showing, the cost of misalignment across the company from all these teams using different tools, not talking to each other, and using different architectures. There is, of course, a cost in the lack of reuse, the lack of sharing of ideas, the inefficiency, but that cost is not always clearly visible within a team, or even to leadership. So if you can find a way of quantifying the cost of misalignment, that is usually the fastest track to getting sponsorship for alignment. And we can help you with that if you want: we've got a software services consultancy group that does exactly that kind of analysis, because it's not always easy to identify and quantify some of these process costs, and they've got more experience in it than I have.

OK, we've got five minutes. Yes, there is another one. How much of a challenge is it to retroactively integrate SystemLink into legacy test systems? On SystemLink, I will caveat that I'm not the expert.
I believe you need a SystemLink client, and then it depends what that test system is doing. If it's just spitting out data to some kind of OPC server, or, if it's legacy, it's probably just some kind of serial stream coming out, then we can take that data, create a data plug-in, and plug it into SystemLink. It doesn't have to be just on your newest line. That client can run separately from your existing code, so you're not going to be rewriting code for that, as long as there's some kind of data handshake that we can work through in an intelligible way. I will say that, in general, if you do have all your data coming out in different ways, standardizing that is probably going to be something you'll be looking into anyway. We all love these buzzwords of digital transformation, digital first, and digitization technology, but the foundation of all of that is having a unified or standardized data format coming from your different test stations that you can manage and find; then you can use that data in all kinds of ways. So, yes, data standardization is the foundation anyway, and we could talk about that; SystemLink is then one of the digitalization technologies you could utilize on top of it, and there will be lots of others, which together will justify that investment in standardization.

OK, there's a follow-up on that last one; let me rephrase it. Is SystemLink capable of supporting existing test systems without code redesign? Yes, it is. As long as the existing code is publishing its data somehow, SystemLink can take that data in. I could pass you on to one of our SystemLink experts on exactly how to do that, but you don't have to touch your existing code, as far as I understand.

Sounds good. I think we have just a couple of minutes remaining, Graham, so maybe we can just wrap up. Yes, OK. Well, thank you so much. And Caitlin, I'm sorry you didn't get to answer any questions yourself; you kind of fell into this role. Caitlin is my counterpart from semiconductor, so next time I will pass you all the semiconductor questions; this time I got all the electronics ones. Sounds good. But thank you, thank you so much, everyone, for attending. We run these periodically, and thank you for your questions as well. If you have any further questions, please do contact me. My name is Graham Greene, graham.green@ni.com. Send me an email, or reply to the follow-up email that comes with this webcast, and we can follow up and contact you from there at NI. Otherwise, I'll say thank you so much. Oh yes, really quickly: there's also going to be a survey. I don't know if that's coming from Kelly or if it'll be an automated email, but please feel free to take the survey. Let us know what was interesting, what was useful, and whether you have any other questions, just so we can make sure this is as useful as we can possibly make it. Actually, no, don't just feel free to take the survey: you must take the survey. We love surveys here, and they mean these sessions can get better and better. Excellent. Well, thank you so much for your time this afternoon, this evening, or this morning.
In fact, you guys are in the US, so have a great rest of your day. Thanks. Bye. Thanks.

Like all software, your test applications have a lifecycle that gets harder to manage as the complexity and scale of tasks increase. And, like most companies, your test engineering team is most likely short on time and resources.

By adopting modern software development processes and standardizing your software development tools, test architectures, and proficiency programs, you can not only mitigate these challenges but overcome them to the point where the output of your team becomes a competitive advantage.

Effective standardization initiatives have been shown to:
- Decrease NPI development schedules, thereby accelerating time to market
- Improve tester quality and stability by minimizing errors and downtime, thus reducing product and manufacturing costs
- Reduce maintenance burden to free up resources and lower sustaining costs

NI provides a complete software, services, and instrumentation platform that has proven itself in tens of thousands of deployments worldwide as customers have gone through successful standardization initiatives.

This webinar will explain the stages of standardization and lay out the implementation steps to get there, before explaining how NI can assist you on the journey. We will discuss industry best practices and case studies from Philips and GN Audio along the way to give tangible examples of the theory.