Hello, everyone. Welcome to another LoRa Alliance technical webinar. Thank you so much for being here with us today. We're going to give it another few seconds to let everyone get connected, but we will get started on time, as we have a lot of great information for you today. Alright. Hi, everyone. I'm Megan Leonard, global program manager for the LoRa Alliance, and thank you for joining our LoRa Alliance webinar today, which focuses on the LoRaWAN Payload Codec API, another great webinar in our technical series. Before we get started, just a couple of quick notes for you all. If you're having any technical issues during this webinar, refreshing your screen, which you can do by pressing F5 on your keyboard, typically solves them. There's also a help menu bar at the bottom of your screen with a help box, so that's another great resource if you run into any issues during today's webinar. The other important box on your screen is the one labeled Q&A. If you have any questions for our presenter during the webinar, just type them into that Q&A box and press submit; we'll receive them on the back end and answer all of them at the end of today's webinar. So please use it at any time throughout today's webinar to submit your questions to our speaker. Lastly, just a reminder that these webinars are available on demand: about 24 hours after today's webinar it becomes available, and you can access the on-demand version using the same link you used to join today. With that, I want to get into today's topic. Our speaker, Mustafa, is from one of our LoRa Alliance member companies, Actility, and he's also the chair
of the Payload Codec API Task Force within the LoRa Alliance. So I'm just going to go ahead and hand it over to him now. Take it away. Thank you, Megan, for the notes and the introduction, and hello everyone. Before starting, I would like to thank you all for joining and attending a new technical webinar from the LoRa Alliance. As Megan mentioned, I'm Mustafa, a software development engineer at Actility, which is a member of the LoRa Alliance, and the chair of the Payload Codec API Task Force, whose work will be presented in this webinar. Let's start with an outline of what the webinar will contain today: a small introduction to briefly present an overview of the topic, then the problem with the current use of codecs and the solution proposed and provided by the Technical Committee under the LoRa Alliance. To make things clearer for attendees, we will then go through some points to learn more about the API structure, the metadata, and the requirements and recommendations when following this standard, all of which are explained in more detail in the specification published by the LoRa Alliance; you will all get a reference link to this specification at the end of this webinar. Let's start with the introduction. The communication between any LoRaWAN device and an application server normally must be translated, and the codec is the piece of code responsible for that translation: it decodes uplinks and encodes downlinks for a device or a group of devices. To clarify what an uplink is, what a downlink is, and why a codec is needed: the data transmitted by LoRaWAN devices or applications is normally sent encrypted, or just as raw binary bytes.
So it needs to be decoded into a format that is useful and readable for the server, or even for a human. The same process is needed in the other direction: when a human or a server needs to communicate with a device and configure it one way or another, that data must also be translated, in this case encoded into a binary format. Each device has its own specification and technical characteristics, in terms of hardware or anything else, so each device has its own way of communicating in a chain of bytes. That's why each device normally needs a dedicated codec to do the translation. So why is this one-to-one relationship between the LoRaWAN device and the codec not optimal, and why does it need some structuring? Let's see; this is the problem. Starting by describing the current process so we can state the problem: in the encoding and decoding of application payloads (and here, in the next slides, by payloads we mean the data communicated between the device and the application server in both directions), each time a device or application is introduced or modified (introduced with its own specific rules and characteristics, or modified meaning updated in a way that has no backward compatibility), there are two ways to support that device: either the application server vendors need to develop a codec for this device, or the device maker needs to develop a codec for each application server vendor that already exists in the LoRaWAN community. And in another case, each time a new application server is introduced, there are again two ways to support the codecs: either the application server vendor develops a codec for all the existing devices following its own API,
or each device maker must develop a codec for this new application server vendor. This really creates huge friction when onboarding devices and applications onto server platforms. Let's go through a simple example, and it really is simple: here we have three application servers, and each application server has its own API, you can see APIs X, Y, and Z, each with its own way of writing decoders. We also have three device makers, A, B, and C. Assuming each device maker has one device, device maker A first needs to create or implement a codec following the API of each application server, so three codecs, one per application server. The same must be done by device maker B: for one device, he needs to create three codecs. And device maker C, the same process. Here is an overview of the whole case: three application servers, three device makers, each with one device. It's really a small, simple example that shows only three devices and only three servers in the field. You can imagine how strongly the complexity would increase in the case of massive LoRaWAN deployments; in reality there are many more application servers and device makers than that. Adding to that, here we are assuming each device maker has one device, which is also not the real case, because each device maker normally manufactures at least three to four devices, and some have more than 20. So the complexity on the ecosystem would be huge. What's the solution provided by the LoRa Alliance Technical Committee? In order to face this time and effort complexity, the LoRa Alliance defines a standard codec API that can be adopted by both device makers and application servers, which was the goal of the specification.
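To make the scaling argument concrete, here is a tiny back-of-the-envelope sketch. The function names and the larger numbers are purely illustrative (not from the specification): without a shared API, every device needs one codec per application server; with the standard, one codec per device is enough.

```javascript
// Illustrative arithmetic only: how many codec implementations are needed
// without vs. with a standardized codec API.
function codecsWithoutStandard(servers, makers, devicesPerMaker) {
  // one codec per (device, application-server API) pair
  return servers * makers * devicesPerMaker;
}

function codecsWithStandard(makers, devicesPerMaker) {
  // one codec per device, reused by every compliant application server
  return makers * devicesPerMaker;
}

// The webinar's simple example: 3 servers, 3 makers, 1 device each.
console.log(codecsWithoutStandard(3, 3, 1)); // 9
console.log(codecsWithStandard(3, 1));       // 3

// A more realistic (hypothetical) deployment: 10 servers, 20 makers, 5 devices each.
console.log(codecsWithoutStandard(10, 20, 5)); // 1000
console.log(codecsWithStandard(20, 5));        // 100
```

The multiplicative term disappears once the API is shared, which is the whole point of the standardization.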
So why are we proposing to standardize the codec? Because, if we take the case from the problem with any new application server or device maker: the device maker must develop the codec implementation only one time, and it will be compatible with all the servers that follow the LoRa Alliance standard codec API. This allows any LoRaWAN device to be easily integrated into all the compatible platforms that use this standard. Let's go back to the same example we looked at before, but now all the application servers use the same API, the standard API we are providing in this specification, and the device makers A, B, and C each still have one device. Before, each device maker had to create or implement three codecs, one for each server. But now, instead of targeting APIs X, Y, and Z, device maker A has to implement only one codec, and it will be compatible with all the servers; the same for device maker B and device maker C. And I will repeat one more time: this is just a very simplified example. As the number of LoRaWAN-compatible devices and application servers increases, this standardization simplifies the process and reduces the complexity and friction of onboarding new devices or even new application servers. So now that we know the advantages of this standard, let's go through its structure: how it is implemented, the metadata, and the requirements and recommendations. As this slide shows, the LoRaWAN Payload Codec API doesn't just
try to standardize the code that will be interpreted, but even the metadata, which is the information used for identifying the codec itself: the vendor of the device, the version, and so on. And the JavaScript code, for sure. The programming language chosen for codecs is JavaScript because it's really lightweight, interpreted, and can be compiled just in time, with first-class functions; there are more details about that in the specification. We chose JavaScript ES5 as it's simple, widely supported in most communities, and interpreted by most browsers. In this section we'll describe the three JavaScript functions that the codec may declare to perform the encoding and decoding tasks. Here in red we have decodeUplink, which is the mandatory function: without this function the codec cannot work. encodeDownlink and decodeDownlink are optional and are only mandatory when the device supports downlinks. Every codec function among these three is defined by its name, input, and output, and in all cases the input and output are JavaScript objects. This allows, let's say, future evolution of this API: if we want to add more fields or properties to the input and output, it will not create any breaking change. So let's start with the uplink decode function signature. Each function, as we said, is described by its input and output, which are JavaScript objects, and for decodeUplink the input has three properties: bytes, fPort, and the receive time, all of which are mandatory to decode an uplink. The bytes property is an array of bytes, which is the payload to be decoded: the payload sent by the device to the application server, to be decoded into a human-readable format, where each byte is represented by an integer between 0 and 255. Next is the fPort, the port field.
The fPort is also always mandatory. Normally there are some reserved ranges that must not be used, while other values can be used freely for any purpose, for example to specify the type of message sent from the device, and you can find the restrictions on which fPort numbers can be used on the LoRa Alliance site. The receive time is the uplink message time; it's normally a JavaScript Date object, and it is also mandatory. In the output object, we have data, errors, and warnings. When the decoding process completes successfully, data is mandatory, because it's the decoded payload; it's really an open JavaScript object, so you can see in the example here it's temperature: 41. You can have anything inside this data object, depending on the device specification and what the device is used for. Errors is mandatory only on failure, and by failure we mean that the provided payload cannot be decoded for some reason: the data arrived wrongly at the codec, either due to an issue in the device or an issue in the communication between the device and the application server. It's a list of simple text strings describing the errors that happened during the decoding process, and when errors is present, the data object disappears, because the data cannot be decoded; so it's optional, and mandatory only on failure. Warnings is always an optional field, and it gives a list of warnings for issues that may have happened during the decoding process but didn't prevent the codec from giving an output; it can be used for an alert, for example. And this is the example: you can see that the input is the encoded bytes, the fPort, and the time, which are interpreted according to the device's specification.
The codec we created, with its decodeUplink, decodes them to report to the application that the temperature is 41 degrees. And you can see the warning here is used as an alert: nothing is wrong, the data was decoded, but there is an alert that the temperature exceeds 40 degrees; that depends on the device. Now the downlink encode function signature. A downlink is the reverse: we are sending data from the server to the device, to configure it for example. As already mentioned, contrary to the decodeUplink function, the implementation of this function is only mandatory when a device supports downlinks. For sure, the data in the input is always mandatory, as it's the only source of the configuration or information to be sent to the device. In the output there are bytes, fPort, errors, and warnings. Bytes is an array of encoded data represented as bytes, again integers between 0 and 255, and it is mandatory on success; the fPort is then also mandatory. There is errors, which is mandatory only on failure: as we already mentioned for decodeUplink, if the codec for some reason fails to encode some data, it gives a list of errors to the user, the developer, or the application server, depending. And warnings is the list of alerts: issues that didn't prevent the codec from encoding the payload. Here's an example. We are saying that we need to set the temperature threshold to 40. encodeDownlink encodes this so it can be understood by the device, so it sends just an array of bytes and an fPort, and the device is manufactured in a way that says: if I receive these bytes on this fPort, then I have to configure this part of the hardware. Then the last function is the downlink decode. You may ask why we have to decode the downlink if we are already encoding it?
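As a rough sketch of the two functions discussed so far, here is a minimal codec in ES5 style. The input and output property names (bytes, fPort, data, errors, warnings) follow the signatures described above, but the byte layout, the 0x01 command byte, and the fPort values are invented for illustration; a real layout would come from the device's specification.

```javascript
// Minimal illustrative codec, following the function shapes described above.
// The payload format here is hypothetical: one temperature byte for uplinks,
// and [0x01, threshold] for a "set temperature threshold" downlink.
function decodeUplink(input) {
  if (!input.bytes || input.bytes.length < 1) {
    // failure: return errors, no data
    return { errors: ["empty payload"] };
  }
  var temperature = input.bytes[0]; // single byte, 0..255
  var output = { data: { temperature: temperature } };
  if (temperature > 40) {
    // decoding succeeded, but raise an alert as a warning
    output.warnings = ["temperature exceeds 40 degrees"];
  }
  return output;
}

function encodeDownlink(input) {
  if (typeof input.data.temperatureThreshold !== "number") {
    return { errors: ["unknown downlink: no temperatureThreshold"] };
  }
  // 0x01 = hypothetical "set temperature threshold" command byte
  return { bytes: [0x01, input.data.temperatureThreshold], fPort: 2 };
}

console.log(decodeUplink({ bytes: [41], fPort: 1 }).data.temperature); // 41
console.log(encodeDownlink({ data: { temperatureThreshold: 40 } }).bytes); // [ 1, 40 ]
```

Note how the error and warning cases mirror the rules above: errors replaces data entirely on failure, while warnings rides alongside a successfully decoded data object.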
After all, we just need to send data from the application to the device. But decodeDownlink is an optional function that may be present when the device supports downlinks, and it makes it easier to monitor what we are sending from the application server side: monitoring the logs, for example, or statistics on the downlinks that have been sent to the device. It's the inverse of the previous function: that one was the encoding, and here it's the decoding of the downlink. The input is the bytes that were sent to the device, the fPort, and the receive time, which were all explained in previous slides, and the output is the data: the data I sent to the device to set this threshold. You can also see the errors and warnings. Here's the decodeDownlink example: if I pass these bytes with this fPort, it tells me I sent "set temperature threshold to 40" for the device to receive. If you compare the two examples: in the first we send data and get back the bytes that will be sent to the device, and in the second is the decode of that same downlink, which is useful for monitoring the logs, or anything similar. Going back to this slide, we said that the API is not only standardizing the code; it also standardizes the metadata. So what's the metadata? The codec metadata includes the main file, which has some fields of information needed to identify the codec, and also some additional metadata, such as examples and a data description, which are recommended. So what's the main file? It's actually a JSON file, and it has the information that helps use the codec in different runtime environments. It must be delivered alongside the JavaScript code, to simplify the distribution and integration of the codec with any LoRaWAN platform compatible with this standard API. So what's inside this file?
There are several fields in this JSON file, and most of them are important. For sure, the codec ID: it's an identifier defined by the codec developer, but it's unique; it's 4 hexadecimal characters. On the LoRa Alliance side we will have a list of all the codec IDs, in order to identify any codec directly by its ID, as we already have for the vendor IDs: any vendor that's a member of the LoRa Alliance has a unique ID on the LoRa Alliance side, and it's listed there. The source URL refers to the source of the codec; it might be a GitHub repository URL, or the website of the company that manufactured this device or created this codec, it depends. The vendor ID is the identifier given to the vendor by the LoRa Alliance; it's also 4 hexadecimal characters, and there are already vendors that have IDs with the LoRa Alliance: all the member vendors already have one. Next is the MAC address block, which is only mandatory if there is no vendor ID: the MAC block is used as an alternative to the vendor ID, because if some device vendor wants to use this standard in order to ease the use of its codec on all the LoRaWAN platforms, but is not a member of the LoRa Alliance, it will not have a vendor ID, so it can use the MAC block instead. Then the version, which is the version of the codec; it follows semantic versioning. This is important because if in the future the codec must be maintained, or the device has been modified, the codec must be updated, so having the version in the metadata is really important. The main property refers to the file that has the three main functions: decodeUplink,
encodeDownlink, and decodeDownlink; it's just a reference to the name of the JavaScript file that's delivered alongside this JSON file and contains the code. The spec version is the version of this specification followed when creating the codec. For now we have only 1.0.0, the first version of the specification, but maybe in the future we will need to add some properties or make some modifications to the specification, so the version will be increased; thus the spec version is really an important field, so that the application server knows which version of the spec a codec follows. An application server can say, okay, we support this version of the standard codec spec, and the codec developer also needs to know which spec versions are supported by the application server. The name is an optional one; it's any commercial name for the codec. Then the features: it's just a JavaScript object, and it describes which optional functionalities are supported by this codec, because as we said, downlink is not supported by every device or every codec; so here there is a boolean saying, okay, downlink: true, I support downlinks in this codec. Examples and JSON Schemas, which we will see later in the next slides, are optional too, so in the features there is a property saying whether they are supported or not. As we said, there is also additional metadata, which is optional, to help the user of the codec or the application server: the developer may provide a list of examples. Examples of what? Examples of the bytes to be decoded and the expected output. Normally these examples will be used by the LoRaWAN platforms to suggest
sample payloads to the user when testing the codec, for example. They are also useful to assess the codec implementation: the codec's memory consumption and execution time, and to compute the coverage, which gives a quality score for the codec depending on that coverage. These examples are delivered alongside the metadata in an examples JSON file, and there are three types of examples: uplink, downlink encode, and downlink decode. An uplink example gives the input as bytes and an fPort, and the expected output data. A downlink encode example has data as the input, and the expected output as a byte array and fPort. And a downlink decode example is the same in reverse: the input is bytes and an fPort, and the output is data. Another piece of additional metadata is the data description. As we already mentioned, the data output from a codec is really open: it's an open JavaScript object, a decoded view of the payload, which by construction maps one-to-one to a JSON object. But we don't know what's inside it, so in order to ease the integration with the platforms, a description of its structure is recommended, provided using JSON Schema; here we have chosen draft 7 of the JSON Schema specification. Here are some samples of uplink and downlink examples and how they're structured, giving the type of the example, a description, and the input and output: this is the expected output if I give this input to the codec, which is really useful. In the downlink encode example, for instance, the type is downlink encode, with a description, data as the input, and bytes as the output.
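For illustration only, an examples file along the lines just described might look like this; the exact field names and type labels here are guesses based on the talk, not copied from the specification:

```json
{
  "examples": [
    {
      "type": "uplink",
      "description": "Temperature report of 41 degrees",
      "input": { "bytes": [41], "fPort": 1 },
      "output": { "data": { "temperature": 41 } }
    },
    {
      "type": "downlink-encode",
      "description": "Set the temperature threshold to 40",
      "input": { "data": { "temperatureThreshold": 40 } },
      "output": { "bytes": [1, 40], "fPort": 2 }
    },
    {
      "type": "downlink-decode",
      "description": "Decode the threshold downlink back into data",
      "input": { "bytes": [1, 40], "fPort": 2 },
      "output": { "data": { "temperatureThreshold": 40 } }
    }
  ]
}
```

Each entry pairs an input with its expected output, which is what lets a platform or a test harness replay them against the codec automatically.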
And these examples are not only used for successful decoding processes, but also to give an idea of the expected output if the decoder receives a wrong or corrupted payload, for example. So this is an example of errors as well; the examples don't only cover the successful cases. You can see here we give the bytes and the fPort, and the expected output is "invalid downlink payload: unknown ID". So here we give an example of how an error can be presented. There are also some samples of how the data description should be provided. Here are two JSON Schemas that describe the output of the decoded uplink and the decoded downlink; for sure, no data description is needed for the encode function, as its output is always an array of bytes and an fPort. You can see in the decodeUplink schema, we have said that when decoding an uplink we may receive as output a temperature, a humidity, or a pulse counter: these are the three possibilities, and additional properties are set to false, so these are the only properties that can be sent by this device. For the decoded downlink, we have said that we can send the device these two types of data: set a pulse counter threshold, or set an alarm, for example, and again no additional properties. So this describes the data and gives more information to ease the integration on any application server. There are also some requirements: required characteristics that must be taken into account when creating any new codec following this standard specification, for example security. These are explained in more detail in the specification, but for security reasons, and because the codecs are executed directly on LoRaWAN platforms, there are some limitations on what the developer can do inside this code.
So the codec itself should not use any external code that gets dynamically loaded during runtime; this is for security reasons. Another requirement is the size of the script. According to some statistics that we have done (Actility, as one of the vendor and solution-provider members of the LoRa Alliance, really uses codecs, and other members contributed statistics too), and according to memory allocation limitations, we decided that the size of the codec script should not exceed 64 kilobytes, which is really enough for a codec for a device or a group of devices. Therefore the application server vendors that would like to follow this standard specification should support codec scripts up to 64 kilobytes at most. For the libraries, we have defined that some predefined libraries should be available, but we have started with one library, and this can be extended in future versions of this specification: we have started with the Node.js Buffer, as of Node.js version 12, in order to handle raw binary data. This is really useful to avoid any large polyfills, which also helps keep the codec size from growing. The recommendations from the task force are about packaging and testing. For the packaging, we recommend using the Node package manager, npm, because it's the most widely used packaging system for JavaScript code, and using npm also defines a clear code layout that can be distributed and delivered independently using the developer's preferred version control tool. More details about that, and the best practices for using npm, are described in the specification document as well. For testing: the testing of the codec is really an important process.
Thus the codec developer is highly recommended to test the codec in as many cases as possible, including error cases, not only the successful ones. The test process of the codec is recommended to prove a minimum of 85% test coverage, and we also recommend using Jest as the testing framework, as it's easy to use, already supports coverage measurement, and facilitates the use-case testing process. The codec developer, or the application server, can benefit from the examples files (and that's why we said the examples are optional but important) by using just one normalized, already implemented JavaScript spec file to load all the payload example inputs and their expected outputs, and thus run the tests automatically and get the coverage of the whole test suite in one script. So the examples are really helpful for the application servers and for the users of the codec. That was all about the standard API. You can find a link to the full specification, if you need more details, in the resources box on your screen, and you can also download today's slides there. Any feedback, comments, or requests for clarification are welcome, so do not hesitate to reach out to me or Alper; here are our direct contact emails. And with that, back to you, Megan. Perfect. Thank you so much for all the great information and your presentation today. As Mustafa mentioned, there is a resources box on your screen that will take you directly to the link you're seeing on the screen, and there's also a link to download today's slide deck as well.
If you're also interested in becoming a LoRa Alliance member, joining the LoRa Alliance ecosystem, and getting some of the benefits that were mentioned today, there's also a link in that resources box to get more information on how to become a LoRa Alliance member. As a reminder, use that Q&A box on your screen if you have any questions; we are going to go into the question-and-answer session now. I believe we're having Alper connect and join us to help moderate the Q&A, so let us get him connected; use that Q&A box to type in any questions you have, and we'll get those answered right now. Just give us a moment to regroup, and we'll start the Q&A session shortly. Thanks, everyone. Alright, great presentation, Mustafa, thank you very much. It seems like we have a very wide audience today, and they've already started putting their questions in the Q&A box, so if you haven't already done so, please go right ahead and type your question in the Q&A box. Here we go. The first question, Mustafa, is: why is there no encodeUplink? So actually, the uplink is sent from the device; it is encoded on the device and sent from there. The only thing needed for the uplink is to decode it on the application server, as it's already encoded by the device, so there's no need to encode the uplink again. Right, yeah, they're already binary encoded, so there's no need to further encode them. And then the next question, well, I think it's a statement, so let me read it: usually application servers adapt to devices in the market and integrate, one by one, the devices they need to support, but standardizing the device protocol and application APIs is the way to go for sure. So yeah, like I said, it's a statement, which makes perfect sense. I can maybe add something to this statement: even if it's a one-by-one
support of devices, the integration of each one in this process would be easier and faster with this standard, as each codec will be compatible with all application servers. Yep. Alright, the next question is: why not both JavaScript and Python on the app side, and C on the end-device side? Also, encodeUplink seems to be missing, which we talked about. Yeah, that's covered. So, the question about Python: actually, we decided on JavaScript, which is more popular for this purpose among the ecosystem, but sure, in the future, when there is demand for a Python binding, we can add it in the next versions of the specification; JavaScript was simply the most popular on the application servers. And I think the question, or the point, about using C on the end device: that's not applicable, because the end device already knows how to process the application payload, and whatever language it's using, it can keep doing so; there's really no codec API relevance on the end-device side, right? Right. Alright, then the next question is: does the codec ID differentiate between different versions of the same codec, or are there different codecs for different applications? I thought the whole point of the standard was to have one codec for all applications. Actually, the codec ID would be added to the QR code, as we have already mentioned in detail in the specification; just as there are vendor IDs registered with the LoRa Alliance, there will be codec IDs. So yes, one standard codec can be used for all application servers, but there's no need to change the ID on every version. Maybe we can discuss it afterwards, but maybe we have one codec ID per version, or we add the codec ID with its version to the QR code. But yes, you're right.
Alright, and the next one is again about the codec ID: if I have devices that are using different payload formats today, can I define multiple codecs and use multiple codec IDs? Yes, for sure. There can be multiple codecs for one vendor, so it's not a one-to-one relationship between vendor ID and codec ID. Yeah, so a single vendor would have one vendor ID, and then they can have multiple codec IDs, and depending on the application they can use a different codec ID, right? Right.

All right. And what about dynamic schemas? In other words, when the content of your uplinks might not always have the same structure? I think here you are asking about the JSON Schema, which is recommended to be included with the codec. If you have a dynamic JSON structure, there is an option for that: inside the schema definition you can use "oneOf", and inside its array you can put multiple alternative schemas. So there's an option for that in JSON Schema.

Right. The next question is: I understand how I submit the formats of uplink and downlink packets, but how do I actually give the LNS, the network server, the decoder/encoder code itself? Yes, normally when provisioning any device you can link the device with the codec by its codec ID; that was the goal of having codec IDs in the system. Right. Go ahead. Sorry, I didn't mean to interrupt. No, that's OK. I was going to say: the user who's provisioning the end device needs to provide the codec ID to the system, like an application server, so the application server knows which codec to use with the application payloads coming from that given device.

And the next one is: what is the feedback of other network server providers, besides Actility, for example TTN and ChirpStack, on implementing this? Actually, we had
several contributors to this specification and this task force. One of them, if I remember the names correctly, was Johan from TTN; I don't remember all the company names, but yes, multiple providers were participating in the task force and in the specification, so we are all on the same path. Right. So this is a LoRa Alliance standard, produced by the Payload Codec API task force under the Technical Committee, and we had members from various parts of the ecosystem not only contributing but also approving the specification. So it's joint work, and it's a LoRa Alliance standard.

OK, the next one is: well, thanks for the presentation. Well, thank you for joining. One question: why use NPM for distribution and not a public API or repo that would allow integration at runtime and not only at build time? Actually, like JavaScript itself, this was the most popular approach in the ecosystem, and it gives the same structure for all distributions. But to add to that: the NPM packaging of the codec is a recommendation in the LoRa Alliance specification. It's not mandatory, so the codec can be delivered in another way; the recommendation is just to deliver it as an NPM package.

Alright, and the next one is: will there be a codec repository available at the Alliance website? Yes, actually we have discussed having a GitHub repository for the LoRa Alliance to hold the codecs, or at least the products with their IDs; that was the plan. OK, and is there any code example for the codec available at GitHub? Or, sorry, maybe I mixed it up with the previous question, because I was answering that one, about a sample or an example of the codec.
So, to clarify: it's not a repository for saving all the codecs, but on the other side we will have the codec IDs. Right: the Alliance actually manages the vendor IDs, the codec IDs are left to the vendors, and there would be a sample codec for others to develop from, but otherwise a full repo won't be hosted. It'll be left to the market to build codecs and provide them to the rest of the ecosystem. All right. And this one you already answered: we are working on providing a code example.

So the next one is: hmm, using NPM would expose my decoder code to the public; what if I don't want to expose it? If you don't want to expose it to the public, you can deliver it to the application server in any other way. Also, with NPM you can pack the codec and its metadata in a private package; there's no need to publish it on the public NPM registry when using NPM packaging, so there's a choice there. Right. Yeah, then you have to take care of how to deliver it to the target platform, but you can keep it away from public eyes if you want to.

Let's see. OK, here's the next one: it seems that the end-device makers do not have to change anything; this payload codec is more oriented toward standardizing the exchange of data between the LoRa gateway and the network server. Is this correct, is the question. No, no, actually. I will go back to a previous slide, to the example we had. There we saw that the device maker needs to develop a codec for each application server. So yes, there is an impact on the device makers: after using this API, you will only have to develop one codec for all the servers that support this standard. There's an impact on the device makers, not on the gateway and the network server. Right, but it's not the device itself that gets impacted, right? Yes, not the device, but the device makers.
Yes, yes, the device maker: if the codec is provided by the device maker, they would follow this API to provide their codec. And if they develop their codec once according to the API, it can then run anywhere using the same API; the physical device and the software stack on the end device won't be affected. All right.

OK, so the next one is: have you defined a list of services, such as temperature, humidity, et cetera, to homogenize the exploitation between different device makers? That is more about the semantics, the ontology part of this. Yes, actually that wasn't in the scope of this version of the specification; maybe in version two, or in another specification that would complement it. But this specification was needed to open the path to start doing that, to start standardizing. There are many existing standards for exposing data, for example Microsoft has one and Google has another, and we could adopt a standardized way of exposing these data, but that wasn't in the scope of this task force for this specification. It may come in the next versions or in other specifications. Right. Yeah, I mean, we chose to go step by step: we took the first step, and what's being asked here is the next step, about standardizing how data is represented. There are some ongoing discussions around that, but the timeline is TBD.

All right, then here's the next question: usually in one uplink more data is transported, like temperature, pressure and humidity. Is this covered in the uplink encoding? There is no uplink encoding, and we have already answered that part. So there's decoding, but the decoding can handle multiple different types of data, right? I mean, a single uplink carrying three different information elements: yes, the decoders can handle that as well. Yeah, definitely they can.
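The last answer, that a single uplink can carry several measurements, can be illustrated with a decoder that splits one byte array into three fields. The byte layout here is an invented example, not something defined by the specification:

```javascript
// Decodes a 5-byte uplink carrying three measurements in one frame.
// Layout (invented for illustration):
//   bytes 0-1: temperature, signed 16-bit, tenths of a degree Celsius
//   byte  2  : relative humidity, percent
//   bytes 3-4: pressure, unsigned 16-bit, hectopascals
function decodeUplink(input) {
  var b = input.bytes;
  if (b.length !== 5) {
    return { errors: ["expected 5 bytes, got " + b.length] };
  }
  var temp = (b[0] << 8) | b[1];
  if (temp & 0x8000) temp -= 0x10000; // sign-extend
  return {
    data: {
      temperature: temp / 10,
      humidity: b[2],
      pressure: (b[3] << 8) | b[4]
    }
  };
}
```

One call, one byte array in, and three named fields come out in the decoded JSON object.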
And the next question is something we just recently talked about: is there a standard for decoder output on the horizon, for example so that temperature is always called temperature and its unit is always Celsius? Yeah, that's a potential next step, and the timeline is TBD. And, just like with anything else, if there are members in this audience, or people considering becoming members, we would highly encourage them to come and join the Payload Codec API task force, to be part of the conversations toward evolving the standard.

And the next one is also a comment: Python is more popular than JavaScript, so regarding the codecs... Let me jump on this one as well. Among the members who participated in the development, JavaScript was more popular. If the ecosystem thinks a Python binding should be provided as well, same thing: you're very welcome to come and join the task force; even adopter-level members can join the task forces. And if and when we have demand for a Python binding, we can support it as well.

OK: what device uses the downlink decode? Isn't this an end device? If so, it's most likely written in C; does it need a decoder written in C? Yeah, I think this is the same participant. How the end device does encoding and decoding is an implementation running on the end device; it's totally isolated from the rest of the system. What we are concerned with here is how the application server encodes and decodes the payload between binary and JSON. So yes, the end device might be using C, but the codec could still be written in JavaScript, because they run on two distinct systems, right? The one written in C would be running on the end device, while the one written in JavaScript would be running on the application server. And yeah, there are different requirements in terms of development-language choice when it comes to running on the end device versus running on
the application server side.

All right, the next one is: will this be supported not only by the LNS, the network server, but also by integrations like Datacake and so on? Those are the application services, you know. Mustafa, that's the question. Actually, it will be supported by the LNS, and after that the result of this translation can be used to easily integrate with other platforms like those. Well, I mean, actually the payload could be translated to JSON on the LNS or on the application server as well, right? Theoretically speaking, in a given implementation, if the LNS keeps pushing the binary to the application server, nothing prevents anyone from running the codec on the application server too. But if the binary-to-JSON translation has already been done on the LNS, then there is no need to do the decoding again on the application server. So, at least in terms of our standard, one can run the codecs on the network server, or on the application server, or in fact on any platform that might sit outside that architecture as well. We don't really have any constraints; that's the flexibility we have.

We have only a minute left, so let me take one more. I can take this one because I think it's a clarification: do you need to use the JSON Schema, or can you just put everything in the decoding function itself? On the JSON Schema: as we said, for the moment there is no standardization of the data that must be exposed, as "temperature" or "temp" or something like that. The JSON Schema here describes the output, but it's not mandatory, it's optional, so you can put everything in the decoding function. It was added to make things easier for the application server when there is a JSON Schema that describes the data.
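The optional JSON Schema mentioned here, including the "oneOf" option for uplinks with varying structure, could look like the following sketch. The schema keywords are standard JSON Schema; the field names are invented examples.

```javascript
// A JSON Schema describing a decoder whose output is either a
// measurement report or an alarm event; "oneOf" lists the two
// alternative structures. Field names are invented examples.
var uplinkOutputSchema = {
  $schema: "https://json-schema.org/draft/2020-12/schema",
  oneOf: [
    {
      type: "object",
      properties: {
        temperature: { type: "number", description: "degrees Celsius" },
        humidity: { type: "number", description: "percent relative humidity" }
      },
      required: ["temperature", "humidity"]
    },
    {
      type: "object",
      properties: {
        alarm: { type: "string", enum: ["tamper", "lowBattery"] }
      },
      required: ["alarm"]
    }
  ]
};
```

An Application Server can use such a schema to validate or display decoder output, but as noted above it is optional; everything can also live in the decoding function alone.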
Alright, so we have reached the end of the hour, so let's stop here. There are a few more questions that we couldn't get to, and we can respond to them over e-mail. Thank you very much, Mustafa, for this insightful presentation and the answers, and thank you to our audience for contributing to this discussion as well. With that, I think we can close today's session. Everyone have a great day. Thank you.

Now available on-demand!

In the absence of any standardization, each time a new device or Application Server is introduced, the Application Server vendor needs to develop a codec for the device/application, or the device maker needs to develop a codec for the Application Server vendor.

The LoRaWAN® Payload Codec API standardizes an API for the JavaScript codec of LoRaWAN devices, enabling adoption by both device makers and Application Server vendors. 

Any codec that follows this standard API provides the capability to decode uplinks/downlinks and to encode downlinks, allowing new LoRaWAN devices to be easily integrated into any compatible platform.

Through this webinar, you will learn how to build a standard codec, focusing on its structure, limitations, and recommendations for optimization.