HighByte Office Hours

Transcript of the Spotlight on the UNS Client and Data Pipelines

Wrighter
37 min read · Nov 13, 2023

As part of the 4.0 Solutions community, I wanted to do my part in helping raise awareness of the most impressive companies in Industry 4.0. So, I created a word-for-word transcript of this edition of HighByte Office Hours and its spotlight on the UNS Client and Data Pipelines.

It took a while to eliminate all the errors in the automated transcript, but it is my hope that others interested in learning about HighByte can now find and consume this content more easily.

If you are keen to learn more about what HighByte is doing, I recommend you visit the HighByte website, where you can get all your questions answered!

The source video is here:

Transcript

Torey Penrod-Cambra: Let’s get started, and the first thing I’m going to do is get this recording going for everyone so that we can share it after the event. So first, some housekeeping items. As just mentioned, this broadcast will be recorded and shared with you in the next 24 hours, and any of the slides or images that you see during the course of this presentation are available on request. Simply email someone that you’re already working with here at HighByte or send us an email at info@highbyte.com, and we’ll get you what you need.

I’d also encourage you to use the Q&A tool to submit questions for the panellists. We had a record-setting number of people register for this office hours broadcast and submit great questions in advance, and we will try to get to as many of the live Q&A questions after the event as we can. So, you know, we’ve built in a little 15-minute buffer if anyone wants to stay later for what I’m calling the bonus Q&A. And then, please complete the brief exit survey at the end of the broadcast. It’s just two quick questions and shouldn’t take you more than 10 minutes.

Okay, so for the next 45 minutes, we’re going to demo the new capabilities in the latest release of HighByte Intelligence Hub, which is version 3.2, and, more importantly, discuss what they mean for you and for your business. We’re going to do that through explanations, reference architectures, and, of course, live demos. So I’m going to kick us off first with a brief explanation of who HighByte is and what HighByte Intelligence Hub is. Then I’ll introduce our speakers, and then we’ll jump into the good stuff: best practices for building a Unified Namespace (UNS) with HighByte Intelligence Hub, as well as how to use the new UNS client and Pipelines Builder. And then we’ll jump into Q&A.

So what I’m going to do now, before I introduce HighByte, is try to get an understanding of how familiar you already are with HighByte Intelligence Hub. So if you could just take a quick second to respond to this: I use it regularly, I use it on occasion, I don’t use it but I’m familiar with it, I’ve heard of it, or, of course, I’m brand new to the Intelligence Hub. Sorry, I had to scroll for that one. Oh, this is exciting. There’s a lot of new folks on here, which is great. Okay, well, it’s good then that I have a couple intro slides. So let me give you one more second. Okay, good mix. I’ll close the poll, and then you can go view the results anytime over there. Great, so now let’s jump back into the slides.

What is HighByte?

So, a quick introduction, then, to HighByte. At HighByte, we’re on a mission to be the global standard for industrial DataOps. What that means is providing the digital infrastructure, the data network, that’s going to enable your digital transformation projects. The company was incorporated in 2018 here in Portland, Maine, and we were established by a founding team with more than 70 years of experience delivering industrial software solutions. Many of us at HighByte, no secret, are Kepware alumni. Since first launching the product in 2020, we now have customers in 18 countries and 15 vertical markets, including automotive, food and beverage, pharmaceuticals, oil and gas, pulp and paper, you name it. We’re very fortunate to be working with some innovative, amazing industrial customers around the world, and we’re supported by a global network of distributors, system integrators, and tech partners that help us support these customers and bring these solutions to life in their environments.

What is HighByte Intelligence Hub?

So, what is HighByte Intelligence Hub? It’s a solution that we developed for companies who are really ready to scale. Our mission has always been to look at what was broken about the infrastructure, about the Purdue model of moving data up each layer of a stack, and to shift that paradigm to a hub-and-spoke model.

With HighByte Intelligence Hub, you can curate and merge machine, transactional, and time series data into an application and then develop a single payload that can be moved to the cloud or another application where it is actually in a format that is useful. HighByte Intelligence Hub enables you to model data, transform it, and contextualize those information models in real time. We often say we’re focused on the data consumers and the use cases, so this isn’t just about data access; this is about getting the data where it needs to go in the format that’s ready to use. What’s unique about HighByte Intelligence Hub, as you’ll see today, is that it’s truly a collaborative environment for both OT and IT. On the OT side, that means OT connectivity: it’s edge native, with a low-code/no-code interface. On the IT side, that means containerization and all of the enterprise administration features that IT would come to expect in an enterprise-grade DataOps solution like HighByte Intelligence Hub.

So now let me click this off. I know I ran through that quickly, but you’ll get the chance to see a lot more today. I’m going to introduce our speakers now. So, Jeff, this is your first webinar with HighByte. I’m so excited to have you on. Jeff is a product manager focused on product positioning, technology partnerships, and reference architectures. He’s an experienced technologist and master of metaphors who brings more than a decade in technology leadership roles in the automotive and discrete manufacturing industries to his role here at HighByte. Aron, of course, is our chief technology officer focused on guiding the company’s technology and product strategy through R&D, technical evangelism, and supporting customer success, and thankfully agreeing to do webinars like this one. Aron has more than 15 years of experience in industrial technology.

Aron Semle: More and more pressure. We’re up for it!

Jeffrey Schroeder: Very formal!

Torey Penrod-Cambra: So I say let’s jump in. I’m going to go right to the first question. Hopefully everyone can see that ok. Feel free to use the emojis below. I love that. Ok, great. First question — UNS. Let’s just get right into it. I love this question from Roger because it’s so honest.

I’d like to see the UNS be more clearly defined. It would go a long way in controlling expectations and creating a community based on a common understanding of key terms. How do you define UNS?

And Jeff, I think I’m going to hand this one to you. But before I do that, I just want to say, obviously Walker Reynolds has evangelized UNS so much, and I feel like he’s the godfather of the UNS term. So, you know, full thanks to him for establishing this term and then letting the world run with it. And so with that, Jeff, I guess I’ll ask you: how do we define UNS? How do you define it?

Jeffrey Schroeder: Thanks, Torey. I think this is the perfect question to kick off the webinar today, and it’s also a huge theme in this particular release. So at a high level, UNS is essentially a pub-sub architecture. It normalizes all the disparate data representations that one finds among their industrial data sources, which tend to be very heterogeneous. Essentially, it allows us to consume all those disparate data sources holistically, intuitively, and consistently. It is a consolidated, abstracted structure by which all business applications are able to consume industrial data. This structure, consisting of well-defined topics and payloads, should represent an organization and the way it consumes information. I find that defining UNS theoretically can get abstract very quickly, so it’s often easier to grasp UNS by understanding the problems that it solves and, more importantly, how industrial data exists in the real world.

I’m just going to pull up a slide here to show this. So the first thing is, you essentially need some form of brokering technology for client nodes to publish and subscribe to. Within there, those nodes can publish to a certain topic structure, and the topic namespace could essentially consist of the asset hierarchy, or it could be oriented toward the use case. But in essence, it needs to somewhat model, mirror, or resemble how an organization is set up, as well as the needs of the consumer itself.

When a lot of people jump into UNS initially, they often fixate on telemetry data, namely PLC nodes. And once they have enough PLC nodes, it might make sense to have a nice, clean topic structure to organize and differentiate nodes and make them browsable. But really, if you’re trying to make operational improvements, or if you’re really trying to understand what is happening in your operation, PLCs are essentially just assets. They don’t really have that much context. They’re really just a series of sensors and actuators running on a really precise schedule. From the perspective of, say, a process engineer or a plant engineer, a PLC could tell you how many times that thing cycled or maybe what part program it’s running. But if you want to consider things like what lot of material it is running, what operator is running it, or which work order it belongs to, that information often lives in other systems.

So in manufacturing, or really in any sort of industrial enterprise, there’s often much more than controllers; there’s much more than telemetry data. And as we know, we all have tons of different solutions to address these. We often find, though, that even working with telemetry data, the higher you go up in the stack, the faster the interoperability challenges, the protocols, and how you make things work evolve. So essentially, UNS doesn’t have to consist of just PLC data. It can have MES data, it can have QMS data, it can have things from the ERP layer, or it can involve cloud services. When we start involving these other nodes, the interoperability challenges become harder. We all know that PLCs have disparate namespaces and no PLC programmer programs like the next, but this problem is magnified significantly when we start involving applications. When we want to interface with applications, their interfaces, the way they represent data, and the way they exchange data are fundamentally different from the way telemetry works. With telemetry, you ask a data source: here’s a name, and it gives you back a value. When you start thinking about ERP systems or MES systems, you have transactional data sources that might have a request body or a response body, and those things might require certain parsing. A lot of applications have interfaces that are thinly wrapped around a database or some backend service. So oftentimes, while our intent might be to get data (let’s say work order information) or maybe to publish some sort of production record, that intent might actually require a sequence or a chain of many API calls to do what we want to do. Additionally, when we start thinking about historical data in a lot of process or continuous industries, process historians aren’t necessarily just a time series technology; in many cases, they’re also a data acquisition technology. There are business processes tied to those types of things.

And so while UNS isn’t necessarily a persistence technology for querying historical information, it often pulls from historical information. You can put things like arrays in there, or you might put KPIs in there. When you start working with historical data stores, you might have to consider things like indexing, arrays, and buffering. So when you accept all these problems as they are, it becomes clear that there’s essentially a need for engineering, and we call this data engineering, or DataOps. It basically provides tooling to deal with these broader sets of problems.

The final capability that I’ll touch upon, which is necessary for a UNS, is visualization. Think about a file browser or file finder: you don’t browse all the files on your PC using, say, the file dialog in Excel. You use a general-purpose tool to discover data, and a UNS likewise needs some sort of tool to visualize it. If you have 200 nodes all publishing into that namespace, it can be very hard to keep track of what’s there, so the visualizations are critical. One of the key themes in this particular release that we have worked very hard on (and it’s a culmination of several releases) is that we now have a full-blown capability for brokering, we now have a full-blown capability to handle all the data engineering workloads, and we now have full-blown namespace visualization embedded into the product.
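To make the topics-and-payloads idea concrete, here is a minimal sketch of what publishing one modeled asset into a UNS namespace might look like, using the Node.js mqtt package. The broker URL, topic hierarchy, and payload fields are all invented for illustration; the point is only that a consumer sees a browsable topic and a well-defined JSON payload.

```javascript
// Minimal sketch: publishing one modeled asset into a UNS namespace.
// Broker URL, topic hierarchy, and payload fields are invented.
const mqtt = require("mqtt");

const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  // The topic mirrors how the organization consumes information:
  // site / area / use case / asset
  const topic = "acme/portland/energy/crusher1";

  // A well-defined payload that every consumer can rely on
  const payload = {
    power_kw: 42.7,
    temperature_c: 61.3,
    timestamp: new Date().toISOString(),
  };

  // Retained so late-joining subscribers see the latest value
  client.publish(topic, JSON.stringify(payload), { retain: true });
});
```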

Torey Penrod-Cambra: Thank you. That was so helpful, Jeff. That’s a really good overview of UNS, from both a high level and then much more specifically in terms of how we define and look at an entire UNS solution. I’m going to jump to the next question. It’s actually many questions. Like I said, you guys really impressed us, and we got a lot of questions on UNS. We’re not going to be able to answer them all today, but hopefully we’ll get into a lot of them. So:

“How does Sparkplug ‘auto discovery’ work when HighByte Intelligence Hub is between devices/gateways and the MQTT broker?”

“Wondering where HighByte might be able to help us build the UNS in our current setup?”

“I’d like to understand how to get live data from machines using UNS and data pipelines.”

“OPC/MQTT connectivity to ingest data to the UNS”

“I would like to know how HighByte can help us create a UNS and get that structure back to Ignition for MQTT transmission.”

So this is just a handful. Thank you to everyone for contributing all of these. I think this is a good time to turn it over to Aron and maybe show us some of these answers. Does that work for you?

Aron Semle: Yes, I got this, as long as I can find the screen. The thing is, my light in here is flickering, so if I go lights out, I’ll still be online. You won’t be able to see me, but we’re going to keep going. All right, I’ll start with a really quick slide. What I’m going to demo is using HighByte to build a factory-level UNS around use cases, and I’ll describe that as I’m doing it. MQTT is the core technology. Then we’ll pause and go into some other questions. Then I’ll show pipelines. There’s a lot of questions around how you use pipelines, and the quick answer is that pipelines are like edge ETL. So I’ll use pipelines in one of the use cases where I want to mix MES and machine data and publish that in the UNS. And then what I’m going to do is actually spin up what we’ll call an enterprise UNS, where I take part of my namespace and push it up to the cloud, say a part that’s globally required, so all factories must implement OEE. So I’m going to push that use case up to the cloud broker under its own namespace or topic structure. And then I’m going to actually integrate that with AWS SiteWise on the AWS side and integrate it with Azure Blob, assuming we have time for all that.

The other piece I’m just going to touch on very briefly is what Jeff had mentioned. I think UNS at its core, most commonly, is MQTT broker technology, report by exception, but in my opinion, there is a transactional component and a historical component to UNS, and over time, those will merge into the complete UNS solution, which won’t be purely based on MQTT. I think those are REST interfaces. And I’ll try to demo a brief part of the transactional part.

The other thing I’m going to try to emphasize in here is that, architecturally, this is a really common pattern for UNS: a factory-level UNS and an enterprise-level UNS. In that design, there’s room to experiment in the factory with potential use cases that will never make it to the enterprise, because you want to test a use case, deliver some data to line workers, and see if that’s useful. If it actually has a return on investment and you can deploy it across the enterprise, like OEE, for example, then it becomes an enterprise-level use case, with a data model that’s enforced at the enterprise. But there’s this idea that the factory still needs some room to play and experiment. And as we discover things we can share across the enterprise, we do.

Let’s jump into it. So, this is my factory-level UNS. What I’ve done in here (and John’s on the call; you can’t see him, but he’ll smile when I say this) is start with the use case. I’m not going to build a UNS that’s asset-driven ISA 95. You can do that; that’s a perfectly fine design. In my case, I’m going to start with a use case-driven design. So, I’m going to say I have an energy monitoring application, a line status application, and an OEE application as my use cases, and I’ve started by defining what those use cases require for data. If you’re not familiar with HighByte, modeling is the ability to define a standard schema for a use case or a machine. So, my energy monitoring use case has things like power, kilowatts, temperature, and RPMs. I could enforce data types and whether these are required fields, but I’ve left it pretty open. My OEE use case is really weak at the moment. I’m just doing counts; I’m not doing quality or uptime in this. Really simple, right? And it says that regardless of the equipment you have in the factory, regardless of the machine, you need to fill in this data. So, I start with that.

In my case, all the data is going to come from Kepware. So, now I’m going to go in and look at where I’m sourcing the data, and that’s going to come from OPC. I’m going to create a quick connection, and I’m going to breeze through this. There’s a bunch of videos on the YouTube channel on how to do this, and there’s no security on my Kepware connection, so it’s pretty easy. Then I’m going to hit browse, and you can see down in here, I’ve tried to simulate a fertilizer factory. I’ve got crushers, mixers, and packer machines. In this case, I’m going to start with energy monitoring on the crushers. Assume these are all attached to the same PLC, but they’re across different lines. So, I’m just going to pull in the first one, return to inputs, and then in HighByte, I’m going to do a test read on that. This is pulling data from the OPC server. You can see I have some OPC tags, some system data I don’t really care about from Kepware, and then the actual tag data. So, I’ve sourced where my data is coming from.

Now I want to start to build that local UNS. In HighByte, you can turn on our broker by going under settings and enabling it. I have allowed anonymous login, and it defaults to port 1885. You do not need to use the HighByte broker to do what I’m doing here. Everything’s standard MQTT, i.e., v3.1 or v5. But in my case, I’m just going to use HighByte to do it. To create my local UNS connection, I’m actually going to connect to that broker that’s being hosted by HighByte on 1885. If I wanted to change this to HiveMQ, EMQX, or Mosquitto, I would just change this connection to point there, and everything I’m showing you would work just the same.
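Because the embedded broker speaks standard MQTT, any client can watch the namespace being built here. A minimal sketch, assuming the demo’s setup of a local Intelligence Hub broker on port 1885 with anonymous access enabled:

```javascript
// Sketch: browsing the factory UNS with any standard MQTT client.
// Assumes the demo's setup: the embedded broker on localhost:1885
// with anonymous access allowed.
const mqtt = require("mqtt");

const client = mqtt.connect("mqtt://localhost:1885");

client.on("connect", () => {
  // '#' matches the entire namespace; narrow the filter in practice
  client.subscribe("#");
});

client.on("message", (topic, message) => {
  console.log(topic, message.toString());
});
```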

So, our intention is to give you as many options as possible, not to lock you into using just HighByte. What I’m going to do now is define my outputs on this broker. This is where I’m going to build the UNS. So, I’m going to have an energy output. We have a connection to the broker; this would be an output, which is essentially a topic, and I’m going to call it ‘UNS Energy.’ This is going to be a little weird for people that have never used HighByte, but I’m going to use what we call ‘dynamic outputs.’ When I go and build the instance of the machine for Crusher 1, I’m actually going to put that name in the namespace, so it ends up under its own topic. That’s what that means. And while I’m in here, I’m just going to go build out the other ones as well because they’re really similar. So, I’m going to do the OEE one and its landing spot in the namespace, and then the last one, I believe, was line status. I forget if MQTT is case-sensitive, but I’m just going to avoid the issue.

All right, I’ve got my output connection with my outputs defined. Now I just need to build out my instances. Instances take a model definition and actually fill it in. So, I’m going to create a new instance. I’m going to call it ‘Crusher 1 energy’ for now, and I’m going to use the energy monitoring schema. This is the magic of HighByte: we’ll pull the data together. In here, I’m going to browse into the OPC input that I had previously set up. And because I set up this demo and I’m trying to make life easy for myself, this is pretty much a one-to-one mapping. But you could mix data from multiple sources in a field, or you could write some JavaScript. In fact, I think I did mess up in one spot and made this a string, so we’ll do a string-to-bool conversion here. And then, if we test read this now, we get the data from Crusher 1 in the schema that we’ve defined for the energy monitoring use case.

The last part to do in HighByte is actually flow that data. So, we’re going to create a flow; we’ll call this ‘energy to UNS.’ I’m going to grab the instance that I’ve defined, the one we just did the test read on, and I’m going to output that to the local UNS energy topic.
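For readers following along, the net effect of a dynamic output is one topic per instance carrying a JSON payload in the model’s shape. A hypothetical example of what the ‘UNS Energy’ output might put on the wire; the topic form and serialization details are illustrative, not HighByte’s exact format:

```javascript
// Hypothetical result of the 'UNS Energy' dynamic output for one
// instance. The instance name fills the dynamic topic segment, e.g.:
//
//   Topic: UNS/Energy/Crusher 1 energy
//
// and the payload follows the energy monitoring model's schema.
const examplePayload = {
  power: 12.4,       // filled from the crusher's OPC tags
  kilowatts: 12.4,
  temperature: 58.2,
  rpms: 950,
};

console.log(JSON.stringify(examplePayload));
```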

And before I turn that on, I’m going to pull up our new cool UNS client that’s baked into the product. This is an MQTT client that can also read Sparkplug data. Any connection you define that’s MQTT or Sparkplug in HighByte, you can select here and then just pick the topic. We did this because I don’t want to answer the question of “What MQTT client should I use?” from customers. You can use any one you want, but we’re going to bake one in so you have it in the product. So here you can see there’s no data being published. As soon as I turn this on — I think I’m going to change it to ‘publish only changes when they occur’ — you’ll see that the UNS gets built out. So: UNS, the energy use case, and there is the data for Crusher 1’s energy.

Now, in my Kepware server, I actually have three crusher machines, and I want to enable all three with this use case. So now I’m going to go back and show you how to use templating to do that really quickly. The idea is to get one machine connected correctly and then scale it. The way we do that in HighByte is with this concept of templating. Rather than call this ‘Line 1 Crusher 1,’ I’m just going to call it ‘Crushers.’ And everywhere in the topic that has an identifier, I’m going to use this templating syntax, and then these template parameters will get replaced. I’m going to provide defaults down here. Basically, when I do a test read on this, it’s going to take these defaults, place them inside the OPC identifier, the address, and I get the same result. So this is now a templatized input. I can pass parameters to it and read multiple crusher machines.

I’m actually going to drive this by templating the instance. Up here, this was called ‘Crusher 1’; I’m just going to call it ‘Crusher energy.’ And up here is where I define those same template parameters, but with the actual ranges. I have lines one, two, and three, and on each of those lines, there’s just a single crusher. So: one, one, one. This means it’s going to match the pattern, one to one, two to one, and then pass those to the input. It’ll make a little more sense once I finish it. And then you need to pass the parameters to the actual input. So I parameterize that input, and I’m going to pass them here. This syntax is a little funky. We are going to provide — pretty soon, I think — a visual helper in the UI that builds this for you. But essentially, think about this as passing the parameters to the input automatically. I’m just going to copy this to each one of the inputs because it’s all going to look the same. The end result is that once I hit ‘test read,’ I’m actually going to read three instances of the three crushers and get the results back as an array.
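The templating mechanic is easier to see outside the UI. The sketch below approximates in plain JavaScript what the Hub is doing conceptually: substituting parameter ranges into a templated address so one definition fans out across many machines. The {{param}} placeholders and tag paths are invented, not HighByte’s exact syntax:

```javascript
// Conceptual sketch of templating: expand parameter ranges and
// substitute them into a templated address. Placeholder syntax and
// tag paths are invented for illustration.
const template = "Fertilizer/Line{{line}}/Crusher{{crusher}}/Inputs";

// Lines one, two, and three, each with a single crusher: 1, 1, 1
const lines = [1, 2, 3];
const crushers = [1, 1, 1];

const addresses = lines.map((line, i) =>
  template
    .replace("{{line}}", String(line))
    .replace("{{crusher}}", String(crushers[i]))
);

console.log(addresses);
// -> [ 'Fertilizer/Line1/Crusher1/Inputs',
//      'Fertilizer/Line2/Crusher1/Inputs',
//      'Fertilizer/Line3/Crusher1/Inputs' ]
```

One templatized input plus these ranges behaves like the three hand-built instances, which is why a single test read comes back as an array of three results.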

So at this point, if I have a fourth or fifth crusher, I just update this pattern syntax to pull it in. This can also be dynamic, so we can pull it from a CSV file or a database, etc., to externalize it. But the way to think about templating is that it’s basically a really quick way to generate the three instances that I need. If you have hundreds of instances, it obviously saves you a bunch of time. The last thing I’m going to do is control the instance name; you’ll see this in a second. All right, now what’s really cool is that I’ve been doing all this work and, if you remember, that flow is enabled. So if I jump to here, and this is going to look … we might have a typo.

Torey Penrod-Cambra: Yeah, I think actually someone in chat is watching very closely. Daniel pointed out you have a typo in your parameter that might cause an error.

Aron Semle: Which one? MAC? Oh yeah, good call. Man! Real-time troubleshooting. I love it. So if I clear this now, do the test read. Oh, oh. I’m officially off the rails. Oh yeah, look, I messed it up all the way through. If something doesn’t go wrong in a demo, it’s not a good demo! There we go.

Torey Penrod-Cambra: That’s funny. You just got some feedback in the chat. “Add auto-check to 3.3”.

Aron Semle: Yeah. So there you go. It’s under the energy use case. And again, if you had hundreds of crushers, the syntax is a little funny, and we’re going to look to make that easier on users, but you can really quickly scale. We have customers that scale up to thousands. So that’s the mechanism to do that. Now I’m going to cruise through and build out the other use cases real quick, and I’m going to do that by copying.

Torey Penrod-Cambra: Even with your typo, Aron, you’re getting plenty of applause. So just keep going.

Aron Semle: My goodness, okay, look out for more typos! So in here, my address space is pretty simple. What I’m doing is the same thing I did for the crusher machine, but I’m bringing on the packers and the mixing machines too. And then, rather than manually build out the instances for the other use cases, I’m just going to show you how to import them. All right, cool. So I need to build out the other instances, but what I’ve done is a hack — not a hack — what’s really cool in HighByte is you can import and export the configuration as JSON. So I’m going to import — not a full project, a partial one — and I’m going to go out to my demo environment, where I just have the instances. That’s just a JSON file that contains the instance configuration. You can see I brought in the line status and the packer OEE use cases.

What is cool about the line status is that it’s bringing in the line status for the mixer, the packer, and the crusher, so if all three of those are good, then the line is good. That’s just some JavaScript logic we’re running on it. I’m not going to walk you through it on the webinar, but if I do a test read, it’s the same deal: I’ve enabled the use case across all three lines.

The last piece is just to flow this back out. So, OEE to UNS: I’m going to grab the packer OEE use case and output that to the local UNS. Let’s go back to the UNS view so we can see this thing build in real time. Oh, we have an error. The one thing I didn’t do is, in here, I’m trying to pull the PLC time from the PLC, and I have not defined that as an input in HighByte. So what I can do is go to the OPC connection. This is pretty cool: hit browse, go to server status, and bring in current time, and that becomes available as an input. I’ve already pulled that in, so that flow should restart and be good. All right, cool. One more flow: the other one was line status to UNS. It’s the same thing; we’re just going to modify the output to the local UNS. And how are we doing on time, Torey? I can show the Sparkplug integration real quick.

Torey Penrod-Cambra: Yeah, we’re good.
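The line-status logic Aron glossed over is a good example of the small JavaScript expressions the Hub supports. A hedged approximation, with invented input names: a line reports good only when all three of its machines do.

```javascript
// Approximation of the line-status logic: the line is good only when
// the mixer, packer, and crusher all report running. The input shape
// and field names are invented for illustration.
function lineStatus(mixer, packer, crusher) {
  const running = mixer.running && packer.running && crusher.running;
  return { running, status: running ? "RUNNING" : "DOWN" };
}

// Example: the packer is down, so the whole line reports DOWN.
console.log(
  lineStatus({ running: true }, { running: false }, { running: true })
);
// -> { running: false, status: 'DOWN' }
```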

Aron Semle: So look at this. It’s beautiful — it’s amazing. This UNS has real data coming through, and we have the data. We modeled it. So this is an example of a use case-driven UNS. I showed you the templating and showed you how to scale it.

One of the questions was about Sparkplug: how do I integrate this, i.e., get it back into Ignition with Sparkplug? What I’m going to do is create another connection with Sparkplug, and I’m going to connect it to the same broker, the broker HighByte’s running on: 1885. Sparkplug’s a little weird in terms of the address; it’s not as flexible in terms of the namespace. So there’s a default group, and I’m just going to call that Portland. What I’m going to do is create an output in here, and I’m just going to surface the OEE use case for now. As the group node, I’m going to put OEE. And then for the device ID, I’m actually going to make that the name of the instances, like I showed before. So I create that, and I’ve got the output defined on the connection. Now I’m going to go into the OEE use case and decide to output that over Sparkplug as well.

When I turn that on, if we go back to the client, you’ll see this client will decode Sparkplug data as well. You can see this is DDATA; this is the group, this is the edge node, and these are the devices, in Sparkplug nomenclature. And in here are the metrics that we’re publishing: the good count, bad count, and all that. And I have Ignition set up, once I sign back in, with its MQTT Engine listening to the same broker. So with that setup, I can jump into the Designer and go to the MQTT Engine tab. You’ll see I just deleted it, so it did a birth request, and you can see that same data is now available in Ignition, updating in real time.

The last piece I’ll show is that you can go back into HighByte, and if Ignition’s publishing Sparkplug — or anything Sparkplug — you can create an input that basically listens to everything. And similar to what Ignition’s doing, HighByte can take that Sparkplug data, convert it into JSON, and show you the real state in the same way. So you can use that to convert from Sparkplug back into JSON. That’s all I’ve got for the local UNS. The next step will be pipelines and global.
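For anyone wiring this up by hand instead: Sparkplug B rides on MQTT with a fixed topic scheme, spBv1.0/&lt;group&gt;/&lt;message type&gt;/&lt;edge node&gt;/&lt;device&gt;, and protobuf-encoded payloads, which is why a plain MQTT client can’t read it directly. A sketch of subscribing and decoding, assuming the community sparkplug-payload npm package; treat the package API as an assumption and verify it before relying on it:

```javascript
// Sketch: subscribing to Sparkplug B data and decoding it to JSON.
// Assumes the community 'sparkplug-payload' package for decoding.
const mqtt = require("mqtt");
const sparkplug = require("sparkplug-payload").get("spBv1.0");

const client = mqtt.connect("mqtt://localhost:1885");

client.on("connect", () => {
  // Group 'Portland', data messages only, edge node 'OEE', any device
  client.subscribe("spBv1.0/Portland/DDATA/OEE/#");
});

client.on("message", (topic, message) => {
  // Sparkplug payloads are protobuf, not JSON, so decode first
  const payload = sparkplug.decodePayload(message);
  for (const metric of payload.metrics || []) {
    console.log(topic, metric.name, metric.value);
  }
});
```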

Torey Penrod-Cambra: That is awesome. Okay, good. Okay, so I’m going to jump in. That was an awesome demo. I’m calling this officially “Jeff and Aron: the Lightning Round.” So this is like one-minute answers, maybe two, because I want to leave enough time. Aron, I think you have a short pipeline demo too, is that correct?

Aron Semle: Yes.

Torey Penrod-Cambra: Okay, let’s save some time for that. So I have four questions here, and you get one minute apiece. Good luck. First one:

“Best practices and common pitfalls when creating a UNS that gathers both OT data as well as business data from MES, ERP, auto-created reports, and so on?”

Jeff, one minute, starting now!

Jeffrey Schroeder: So, best practices: no surprise here, build a strategy around facilitating the particular use cases. Think about it as essentially an engineering problem. The other thing I would say is to consider the nuances of the interfaces of these things. This helps with scalability over time: you can leverage templating, as Aron just showed, to make the configurations efficient, and you can parameterize connections to reuse them and dynamically read and write records. And when we talk about merging data from multiple different data sources, make sure they’re merged close to the source. The further from its origin we try to construct relationships between data, the harder it’s going to be. So we say: merge it at the origin.

Torey Penrod-Cambra: Well said. I love this question:

How can small manufacturers, less than 500 employees, adopt a UNS?

Jeffrey Schroeder: With industrial DataOps, and I would say with digital transformation in general, strategy trumps size and budget. A smaller, focused, more agile team with a good strategy will accomplish significantly more than a large, unaligned team. We generally strive to make the cost of change within the product — so, for example, a configuration or an architecture — as close to zero as possible, because when we go down the path of digital transformation and start doing things, we learn things. What we want ultimately is a function of what we know, so being agile is important. And then, obviously, I would say, consider enablement. With this particular release, we have a knowledge base out there with a lot of resources, not just about HighByte but also about the other things it connects to. And I would also say: network with industry peers, collaborate, and don’t be afraid to ask questions and learn things.

Torey Penrod-Cambra: Perfect, okay. So first of all, thank you, Dennis, for that question, and then I have two questions back-to-back. They have some similarities to them. So the first one is:

Is there a guideline for building a UNS data model to be used by the end application, or is it fine to create your own definition?

Jeffrey Schroeder: So again, start with the use case. Start with the target here. You know, a lot of people get hung up on ISA 95 or B2MML, and I often ask, “What do you have that actually consumes that?” Since you mentioned metaphors and analogies: just imagine we found Latin to be a very exciting language. If it doesn’t have a lot of adoption in an organization, it ultimately becomes a source of friction. So, consider how you are consuming data and model it according to those specific needs. If you find that there are models or standards in the market that work for you, leverage them. If you don’t, reinventing the wheel is completely acceptable, but, again, try to get as much value as you can out of how you’re modeling data, as opposed to modeling it for modeling’s sake. Don’t adopt standards just academically.

Torey Penrod-Cambra: Okay. So since you brought up ISA 95, this is the last question I’ll ask before we do a quick poll and then move into the final demo.

The plant floor still follows the ISA 95 pyramid structure. Can the UNS run in parallel without dumping this existing structure?

And I feel like we’ve answered this in bits and pieces so far, but Jeff, maybe a quick comment, and Aaron, feel free to weigh in as well.

Jeffrey Schroeder: So, ISA 95 comes up. It’s a massive standard. A lot of people talk about organizing hierarchy according to it, but I would say that’s less than probably half a percent of the overall spec; it’s a very small sliver of it. I would say, though, that anytime you’re dealing with stable technologies like those in a plant, there are really no big-bang changes. Generally, digital transformation is some form of hybrid, and so you have to coexist with things for a while. One thing that’s absolutely incredible about the UNS is that, because it breaks down some of those point-to-point paradigms, it lends itself very well to coexisting with legacy architectures. You might find that you displace legacy architecture over time, or you might find that some of it has merits and you actually preserve it and coexist with it over time.

Torey Penrod-Cambra: All right, we’ve told you a lot about UNS. Maybe now it’s time for you to tell us — if I can get my mouse to work. There we go. Okay, so I’m launching a poll. The question is:

How critical is the unified namespace to your digital transformation?

· Critical

· Important

· Evaluating now

· Unsure

· Not relevant

If there are a lot of “Not relevants,” we’re really sorry for the last 40 minutes of your day! There we go. Okay, some responses are coming in now, which is great. Awesome. Okay, this is a really interesting mix, a big range. A lot reporting critical or important, and about a third evaluating now. Great, thank you for taking the time to respond to that. Let me go ahead and close this. And with that, I’m not even going to pull up a slide to ask this, because I remember exactly who asked. It was Walker Reynolds. He asked us the question:

What is your pipelines elevator pitch, you know, in 30 seconds?

So, Aron, I’m going to hand that to you, and maybe you can answer it and then go straight into the “how.”

Aron Semle: Yeah, so, the 30-second pitch. I’ll try to share my screen while I’m doing this. Pipelines in HighByte are edge-based ETL, right? There are use cases where you can’t just, like I showed you, bring some data together from OPC, model it, and send it up; it’s a little more complex than that. Let me jump into the demo to show you one of those. Keep me honest on time here, Torey.

So, under line status, we have an order number that’s coming from the line, right? And we have an MES system that has that order number in it, with additional information about the order, and we would like to get that into the UNS. Pipelines are a great way to do that. What I’m going to do is create a SQL connection; I’m just going to call it MES. On my local box, I’ve got Microsoft SQL Server, and in there, I’ve got an orders table that I’ve pre-created and seeded with some data. So, I’m going to select everything from it; I’ll call this input “order info.” Test that input, and you’ll see for each order there’s an order number, customer ID, the order count, start date, end date: some stuff that’s MES/ERP-ish. Now I’m going to parameterize this with a “where order number is equal to” clause. Just like we did with OPC, I’ll parameterize it and turn on templating to provide a default. So now, when we test read this, it’s going to pass in that order number and should return just a single result. There’s that order number, and there’s where I created my data; I have some spacing issues in there, but let’s just ignore that. So now I have the lookup; this is the system I can look that order number up in.

What I want to do is orchestrate that. Currently, my line status is coming in and going straight to the UNS. I’m going to send it to a pipeline instead, and I’ll use the order info pipeline. Once I turn this on, we’ll go look at what that looks like. If you haven’t seen it before, this is pipelines: the idea is the flow flows into this, for now. An event comes in, and then you can do some stuff with the event; a bunch of stuff, and the stuff is growing. In this case, you can write custom JavaScript, and in this example, I’m looking at the running flag. If we launch that again — I’m really bad at navigating off this. In here, under line status — oh, nothing’s being published, because nothing’s running at the moment. But anyway, I’m looking to say, “Hey, if the machine’s off, just stop — don’t process the event. We’re done.” The next stage is a read stage that’s going out and calling that order info input we had spec’d out and passing in the order number. So this is how we’re mixing the data, doing the lookup, and then passing that on. I have another transform that’s doing some light modeling, and then it’s sending it to the UNS. Eventually, we’ll have an event-based modeling stage in here as well, but we have buffer stages and some other stuff as we build this out.

So, let me just go make sure this is enabled and go back to the UNS. I think when I went to test before, all my lines were off. But you can see now, when we run this through the pipeline, I’ve augmented the data with the order information. If I expand this, not only does it have the order number, now it has the order info. And I haven’t modeled this; I’ve just sent it raw into the payload, and it’s being injected into the UNS.

So, that’s one example of pipelines. Basically, if you have more complex, ETL-like work you need to run on the data before it makes it to the UNS, when it’s not as simple as gather, model, send, that’s where pipelines come in handy. People in the space will probably know that you often need to mess with the data quite a bit, because it’s in really weird formats, and pipelines help you do that.
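Conceptually, that pipeline is a short filter/enrich/publish sequence. A plain-code approximation follows; the real stages are configured in the Hub’s UI rather than hand-written, and the function and field names here are invented:

```javascript
// Plain-code approximation of the order-info pipeline's stages.
// 'lookupOrder' and 'publishToUns' stand in for the Hub's read and
// write stages; all names are invented for illustration.
async function orderInfoPipeline(event, lookupOrder, publishToUns) {
  // Stage 1 (transform): if the machine is off, stop processing.
  if (!event.running) return;

  // Stage 2 (read): look the order number up in the MES database,
  // e.g. SELECT * FROM orders WHERE order_number = @orderNumber
  const order = await lookupOrder(event.orderNumber);

  // Stage 3 (transform): augment the event with the order info.
  const enriched = { ...event, orderInfo: order };

  // Stage 4 (write): publish the enriched payload into the UNS.
  await publishToUns("UNS/LineStatus/Line1", enriched);
}
```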

So, the last part of this demo I’m going to spin up is sending this data up to the cloud; I’m going to call it a cloud-based UNS. What I’ve done is launch HighByte in a container on Docker Desktop. It’s also hosting a broker, on port 1888, and exposing that to the system. So, I’m going to go in here and create my Enterprise UNS connection, which is an MQTT connection. It’s not to the cloud, but it could be; it’s just to that Docker container. I’m going to create that, and then on my local UNS connection, I’m just going to pick the part of the namespace that I want to listen to. For this case, the OEE use case is the only one that’s globally relevant. So, I’m going to grab that one, and I’m going to grab everything that’s being published to that topic. If I do a test read on this, I’ll see line packer 1. Then, similarly, I’ll create a flow that goes to the global UNS: I’m going to take that local UNS input, the OEE input, and send it out through the global UNS pipeline. In this case, I have a brief pipeline as well, and I’ll jump in and show you the pipeline right after this. You want this to be an event-based flow, which means that any time an event comes in from the broker, we want to make sure we process it and send it along. So, that’s the configuration there. I’m going to save that, and it’s running. Now, if I jump out to my 45246 — this is my Enterprise HighByte instance or broker — and I launch its UNS client and connect, I should start to see data if I’ve done it right. Which clearly I haven’t! So, let’s jump in real quick and look. It’s the global pipeline, and we have got a pipeline error. All right:

“Expression engine failed to calculate expression. TypeError: Cannot read property ‘replace’ of undefined.”

One second, I’m doing a light transformation here. Oh, I’ve got it. In here, in my local UNS input that I created, I actually want to include the metadata, which is the topic. It changes the shape of the data, but what I’m trying to do is replicate this part of the UNS up to the global UNS. And in my pipeline, what I didn’t show was that when I actually output this, I’m using the event: this is the partial topic for my local UNS that I’m replicating, and then I’m putting it under a Portland production node. So, if I jump up here now, it’s all working, and you can imagine Seattle or other sites coming in the same way. Basically, that OEE use case is now up here.

And now that it’s up here, I’ve cheated: I’m not going to create this live, but I have a cloud flow that’s going to send this to another pipeline that does the cloud processing. So, I’m taking all the OEE stuff that’s coming from Portland, and I’m sending it on to its own pipeline. This one’s a little more complex: it’s coming in, buffering all the updates in 10-second chunks, turning those into CSVs, sending them to Blob storage, and also sending the data to AWS SiteWise. And if I jump out to finish this demo, you’ll see Portland production. I’ve defined these parts statically, and then I’m landing all the OEE data under here. So, if I jump to measurements and we cross our fingers, this should be pretty recent. Yeah, 11:48 am. So that’s updating in SiteWise. Then, if we jump out to my test bucket in Blob, you’ll see I’m generating CSV files. And, just to finish it, if we download one of these and open it up, we can see the data in CSV format. So, done. It jumps around, but that’s how you take part of that namespace and move it to the cloud, and there you can decide how you’re going to integrate it with your cloud tech, whatever that may be.
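The local-to-enterprise replication configured here amounts to an MQTT bridge that re-roots one branch of the factory namespace under a site prefix. A minimal sketch, with ports and topic names loosely matching the demo’s setup but otherwise invented:

```javascript
// Sketch of local-to-enterprise UNS replication: subscribe to the OEE
// branch of the factory broker and republish it under a site prefix
// on the enterprise broker. Ports and topic names are illustrative.
const mqtt = require("mqtt");

const local = mqtt.connect("mqtt://localhost:1885");      // factory broker
const enterprise = mqtt.connect("mqtt://localhost:1888"); // enterprise broker

local.on("connect", () => {
  // Only the globally relevant use case leaves the factory
  local.subscribe("UNS/OEE/#");
});

local.on("message", (topic, message) => {
  // Re-root under the site, e.g. Portland/UNS/OEE/Packer1, so multiple
  // factories can land side by side in the enterprise namespace
  enterprise.publish(`Portland/${topic}`, message);
});
```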

Torey Penrod-Cambra: Aron, props to you and the R&D team, who did a ton of work on pipelines. The tool is so much more sophisticated than in its first release. It’s incredible.

Aron Semle: Yeah, it’s coming along. We’ve got more to do, but yeah, we’re excited.

Torey Penrod-Cambra: So just a time check: we can answer questions up until noon Eastern time. So for anyone who wants to stay on, there were a couple of other questions submitted in the registration forms that I can pop up now. I think you guys have put a lot into the chat and Q&A too. Thank you, John and Brad, for answering so many of those questions. And again, if we didn’t get to them, we’ll follow up with you by email. Let me go ahead and share this one we answered. I do have one bonus round question, though, which I think pertains to this since we’ve been talking about deployments and architectures. We got this question from Marcus, so thank you, Marcus.

How do we centrally manage multiple HighByte installations at multiple sites in the optimal way?

So short question, big topic. Who wants to take this one?

Jeffrey Schroeder: So, similar to what Aron mentioned earlier, we had a series of edge UNSes, right? And you might need DataOps for one particular use case, let’s say machine-type information. Others might be more application-based or dealing with files; others might deal with sensors or time series-type information. Those obviously can be published up to an enterprise-style UNS, and I think what Aron demonstrated earlier is that the enterprise UNS can serve as an almost canonical data source. But then you also might need additional DataOps in the cloud layer to enable interoperability with things like SiteWise or ERP systems or any other applications you might have in the cloud.

But this particular question was about orchestration. You can see in this architecture here that we have four instances of Intelligence Hub running, and obviously, we have to lifecycle these things over time. So we have something called Enterprise Administration: essentially, all hubs have an interface, an outbound WebSocket connection, that allows Intelligence Hub to be deployed deep in an OT layer, and we can network all the hubs together to share configuration. In the upper left-hand corner there, I have a CI/CD pipeline. We have a lot of customers that might have a special Intelligence Hub for doing development, a sandbox; then they might promote that to QA and test it out; then that finally gets pushed to production, and they might push out those models. The nice part about this whole configuration plane is that you can compare modeling information and such. This is a really great way of keeping projects in sync at all corners of your enterprise.

Torey Penrod-Cambra: Nailed it, Jeff, thank you. You came prepared for that one.

Jeffrey Schroeder: Yep!

Torey Penrod-Cambra: Good. Okay, I have one more question in here, I believe. And that is:

“I would like to understand cloud deployment options. What roles can UNS and Industrial DataOps play in the cloud?”

And I’m asking that question, but I also just want to reiterate a point you made, Jeff: for customers that aren’t going down the UNS path, a DataOps application like HighByte Intelligence Hub still offers tremendous value to the business in terms of integration and modeling. There can be many different use cases, not just building a UNS, although, of course, that’s the one we’re here to spotlight today. So, Jeff, do you want to start with this one, and then, Aron, maybe you wrap us up? We still have a few more minutes.

Jeffrey Schroeder: Sure. So, yeah, if you think about UNS, as the name sort of implies, it’s unifying all the disparate namespaces that we have in our enterprise. At a particular site or on particular lines, the information, the raw industrial data, might be structured in a certain way, and the whole basis of the UNS is to make it consumable and consistent, to normalize all these things. When you start thinking about the consuming side of that, whether that’s specific cloud services for ML- and AI-type workloads, or visualization, or more traditional business applications like ERP, those things essentially have their own schema or namespace. So there quickly becomes a need for a higher level of DataOps, where modeling, transformation, and broader data engineering capabilities are necessary to enable interoperability between, say, our enterprise UNS and our applications. What do you think, Aron?

Aron Semle: Yeah, I think that’s a great answer. I’d take a slightly different twist. I don’t think the question is asking this specifically, but when you think of a factory-level UNS, I think you want to start thinking about how it’s deployed: it’s orchestrated from the cloud, but it’s deployed at the edge. A lot of companies are starting to use cloud infrastructure and DevOps tools to deploy this stuff at the edge and manage it there, where it runs. So UNS and cloud are eventually tied together anyway, and DataOps is core to all of that.

Torey Penrod-Cambra: Great, okay. I like that we also have some customers answering questions in the chat too, so thanks — that was helpful! John and Brad are doing their best to get to all of these. I did see a question about — just maybe to wrap this up — next steps: how do you use HighByte Intelligence Hub, and how do you get your hands on it if you don’t have access already? Both the executable and the container have come up in these questions a few times. I can answer this myself too, but Jeff or Aron, do you want to talk about how people can start experimenting with HighByte Intelligence Hub and download the software?

Jeffrey Schroeder: Yeah, so obviously a guide link was posted in the chat there. We have a trial program to get involved with the solution. We try to encourage agility and, you know, consequence-free trials of the product, so it’s not this big, massive undertaking. In terms of deployment technologies, we always have a Docker image available for every release and beta, so it can be quickly deployed in your infrastructure.

Torey Penrod-Cambra: That’s great. So you’ll see that there’s a link now in the chat to the trial program. If you don’t already have access to the software, just fill out that form and submit it, and we’ll get back to you quickly; we try to turn things around fast and get you access to the software. You’ll also have a trial accelerator kit, with some exercises and dummy data to help get you acclimated with HighByte Intelligence Hub. And once you have a trial license or production license, you can actually flow data using HighByte Intelligence Hub. And with that, it is 11:57 am. So just to reiterate a few things from housekeeping: this webinar was recorded, and you’ll get the recording in the next 24 hours. If you saw any images or reference architectures today that you’d like to have on hand, just let us know. You can reach out to Jeff, Aron, or myself directly, or just email info@highbyte.com. We’ll try to host these office hours more often, and we’ll keep asking you what you want to see most. So please don’t be shy about submitting questions in the registration form, and we’ll tackle as many of your questions as we can next time. So with that, thank you so much for your time today, and we look forward to seeing you again soon. Thanks.
