Creating a hardware and software ecosystem to bring AI-powered algorithms for autonomy into space missions – with KP Labs

Podcast

Episode 22 of the Space Industry podcast is a discussion with Helena Milevych and Jakub Nalepa from satsearch member KP Labs.

Episode show notes

KP Labs is a Polish NewSpace business founded in 2016 that develops autonomous spacecraft and robotic technology.

In the episode we discuss creating a hardware and software ecosystem to bring AI-powered algorithms for autonomy in space missions. We cover:

  • The current maturity of AI systems for autonomous operations
  • The types of missions and requirements that could require such functionality
  • KP Labs’ Smart Mission Ecosystem and the benefits it can bring to missions
  • Predictions on the growth of the market for AI-powered autonomy in space missions

The product portfolio of KP Labs

Oryx is a modular flight software tool developed for the mission control of small satellites. It manages all satellite tasks – namely, processing telecommands sent by the operators, monitoring the power budget, executing pre-defined schedules, managing emergencies and handling data.

The KP Labs Antelope can work as an On-Board Computer (OBC) with an optional Data Processing Unit (DPU) module, or as a standalone DPU. The OBC is the heart of the satellite, responsible for satellite control and core tasks such as communication handling, monitoring the satellite’s subsystems, handling the classic Fault Detection, Isolation and Recovery (FDIR) mechanism, and performing planned tasks.

Oasis is a single-board, CubeSat PC-104 compatible electrical ground support equipment (EGSE) unit that serves as an interface between the PC running the satellite system simulators and the hardware engineering model.

Leopard is a CubeSat standard-compliant Data Processing Unit (DPU) designed for the application of AI solutions in space. It uses Deep Neural Networks to process data on board and features an FPGA to implement deep learning algorithms. The system has a throughput of up to 3 tera operations per second (TOPS).

The Herd is a set of AI-powered algorithms designed for Earth Observation (EO) data analysis.


Episode transcript

Hywel: Hello everybody. I’m your host, Hywel Curtis and I’d like to welcome you to ‘The Space Industry’ by satsearch, where we share stories about the companies taking us into orbit. In this podcast, we delve into the opinions and expertise of the people behind the commercial space organizations of today who could become the household names of tomorrow.

Before we get started with the episode, remember, you can find out more information about the suppliers, products, and innovations that are mentioned in this discussion on the global marketplace for space at satsearch.com.

Hello and welcome to today’s episode. I’m joined by Helena Milevych and Jakub Nalepa from satsearch member KP Labs. KP Labs is a Polish NewSpace business founded in 2016 and it develops autonomous spacecraft and robotic technologies.

Today, we’re going to be discussing how to create a hardware and software ecosystem in order to bring AI-powered algorithms for autonomy into space missions. It’s a really timely topic, one that a lot of businesses are grappling with and thinking about for the future.

Helena and Jakub, welcome to the podcast. Is there anything you’d like to add to that introduction?

Helena: Hello, thank you very much for this introduction. I think you perfectly described what we are doing as a company.

Hywel: Okay. Let’s get into today’s topic. So from your perspective, how mature are the AI systems that can be implemented in space missions today? Could you maybe break down what AI-powered algorithms for autonomy are – what this phrase really means for missions today, against existing capabilities, against the operations that we’re more accustomed to?

Jakub: Yeah, I think that’s a great question. We can probably break these AI algorithms down into different specific parts, because it very much depends on the mission that we are trying to target.

So we have quite a bunch of different missions. For Earth Observation, for instance, we may employ AI at different levels. We can think of capturing a huge amount of data onboard the satellite, and given that amount of data, it’s nice because we can try to extract some useful information from the raw data.

That data is also difficult to transfer, to analyze onboard, and to act on while it is captured in orbit. So AI for, I’d say, non-critical missions like Earth Observation might be used for extracting knowledge from raw data onboard the satellite. And you can quantify the maturity of AI using, for instance, TRL – Technology Readiness Level.

You can also break AI down into different parts, and different algorithms are at different maturity levels. For instance, if we analyze image data, we are pretty advanced, because we’ve been doing this for years across different imaging modalities. And now we are in the process of deploying such algorithms on board, which is not trivial because we are in a very extreme execution environment.

We need to make sure that the algorithms are robust enough to be deployed in space missions, which is very different from ground applications. We are actually getting there, but there are still things to do, depending on the mission and on the characteristics of the specific target.

Helena: And I would also like to add some examples while we’re speaking about missions, there are also some other examples of how we can use AI in space.

I would like to bring up a few examples that are not only mission-related. For example, in 2018 the German Aerospace Center (DLR) launched CIMON, a Crew Interactive MObile companioN, to the ISS, the International Space Station. It was able to see, to speak, to hear, to understand, and of course, to fly.

It stayed for 14 months and was then followed up by CIMON-2. I think it’s another example of how we can use AI to assist people and to support astronauts.

And there is another example related to the ISS, but this time from JAXA, the Japanese space agency: it is called Int-Ball. As you can imagine, it is a ball, and it supports the astronauts on the International Space Station by taking pictures and videos. So I think we are already experiencing AI in space – not only in missions, but across a wide range of different scenarios.

For example, NASA also cooperated with Google to train AI algorithms to search for new data from the Kepler mission. Or, for example, there is a project called artificial intelligence data analysis that aims to search data from ESA, the European Space Agency, and from NASA, from all around the solar system, to bring new discoveries and to reveal different anomalies.

I think it’s already happening. Maybe not at mass scale, but it’s already there. And maybe we can also bring a few examples from our company, from our activities. With Antelope, an onboard computer that aims to analyze telemetry data, we submitted our idea via ESA’s website for ideas. We wanted to analyze the telemetry data from OPS-SAT, to train on it and to understand whether we can predict anomalies in telemetry data. As you can imagine, the more satellites we have, the more difficult it is for an operator to catch these anomalies.

This is one example. There are also a few other examples, like super-resolution reconstruction from a single image or multiple images, and hyperspectral image segmentation. There are more and more examples, but maybe we’ll see the real scale in the upcoming missions.
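In its simplest form, the telemetry anomaly detection Helena describes can be sketched as a rolling z-score test on a single housekeeping channel. The sketch below is purely illustrative – it is not KP Labs’ actual algorithm – and the window size, threshold, and simulated bus-voltage signal are all assumptions:

```python
import random
import statistics

def detect_anomalies(samples, window=50, threshold=4.0):
    """Flag samples that deviate strongly from a rolling baseline.

    samples: one telemetry channel (e.g. bus voltage) as a list of floats.
    Returns a list of booleans marking suspected anomalies.
    """
    flags = [False] * len(samples)
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]   # recent history only
        mu = statistics.fmean(baseline)
        sigma = statistics.pstdev(baseline)
        if sigma == 0:                     # flat channel: any change is suspect
            flags[i] = samples[i] != mu
        else:                              # otherwise apply a z-score test
            flags[i] = abs(samples[i] - mu) > threshold * sigma
    return flags

# A nominal signal with one injected fault
random.seed(0)
signal = [random.gauss(8.0, 0.05) for _ in range(500)]  # assumed 8 V bus, small noise
signal[300] += 2.0                                      # injected spike
flags = detect_anomalies(signal)
print([i for i, f in enumerate(flags) if f])
```

A flight implementation would of course need far more care – per-channel thresholds, robustness to mode changes, and the verification and validation Jakub mentions below – but the core idea of comparing new samples against recent history is the same.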

Jakub: I can add something to that, because Helena brought up the interesting example of Antelope, this system for detecting anomalies in telemetry data. It’s not only about training algorithms and deploying them onboard the satellite, but also about how we can trust that kind of algorithm operating onboard.

We definitely don’t want to lose a satellite. We need to prove that it can actually work in the wild, and for this reason the verification and validation process is super important in the deployment chain, because we need to prove that things will ultimately work in practice. That is a critical component in space missions.

Hywel: Yeah, absolutely. We talk about this all the time from the hardware perspective, but as you say, it’s equally important at the software, or even the algorithm, level. And a very interesting concept that you mentioned there, discussing the TRL of the individual algorithms.

I don’t think we often break things down to that level of granularity, but if the algorithm is so critical to the success of the mission, then of course it is vital and important. And thank you for sharing all those examples as well. I think quite often we discuss uses of onboard AI and onboard processing in Earth Observation.

But there are many more examples out there, of many different types. We’ve discussed individual missions there; perhaps, if you wouldn’t mind, could you touch on some of the types of missions and, in particular, the technical requirements you see in the industry that would require autonomy, or would benefit from AI-based autonomy onboard?

Helena: I think it’s a very interesting question. At this point we are probably mostly thinking about Earth observation missions, but what we also see in the industry is that more and more companies are looking at different types of AI-based missions, like in-orbit servicing, space debris removal and asteroid mining – though maybe that is a very future concept.

We already see missions working in this direction, and also deep space missions. If a satellite cannot be reached easily, then some level of autonomy and some level of AI is really useful. For example, there are more and more lunar and Martian missions being discussed.

And this is where it is really useful and really helpful. But in terms of requirements – this is a very interesting point, I would say – we are looking for a trade-off between power availability on the one hand and computational power on the other. For example, in terms of in-orbit operations like manufacturing or printing in orbit.

We have to think about this balance – what we expect and what we can have in terms of hardware and software. I would also say that at the moment our company is looking at the CubeSat format, but of course we are also thinking about bigger satellites. As for the requirements, while talking with different partners or clients we see a sensor-specific approach, because there are different types of sensors.

Of course there are multispectral, hyperspectral, RGB, lidar and radar sensors – all the different types of payloads. We have to take this into account, and it is another issue, because there are a lot of different interfaces that we have to create and develop.

I would say that at this point it’s very much about the trade-off between what we can get right now and what the industry wants from us. But of course, the huge advantage that we see is the reprogrammability, thanks to the FPGA. Our computers, our hardware, are based on FPGAs, so when the satellite is already in orbit, we can reprogram it.

It means that the same satellite can perform different actions, different mission goals, within the same mission. So, for example, for half of its orbits the satellite is doing Earth observation, and for the other half, for example, shipment tracking. There are different things we can work on thanks to AI – thanks to this, let’s say, agile approach. What else I can say here is that, in terms of mission types, we are first of all thinking about missions that are looking for a fast response, or maybe near-real-time data. This is where onboard data processing is really useful and helpful.

Jakub: Yes. It’s also important to understand the benefits of employing AI on board, because we are bringing these kinds of data-driven algorithms onboard.

And what can we actually get out of this? Helena brought up the important point of real-time processing, because in specific applications it might be the case that data that is old is not useful at all. So we need to get this data as fast as possible to actually extract some value from the raw data. That is where AI could help – it could accelerate the process of downlinking important information from in-orbit operations.

Hywel: Right. Great. So this is the reason why, as you say, time and effort is invested in ensuring that it works on board, as opposed to on the ground, which is really interesting. We hear so many times in space about trade-offs, and that’s important to remember at every stage of mission design and development.

Now, at KP Labs, you’ve developed something that is described as the Smart Mission Ecosystem. I wondered if you could describe this briefly for us and explain what the parts of the ecosystem are, and how it ultimately delivers value to space missions, accounting for these trade-offs that we’ve mentioned and maybe some of the use cases that you’ve discussed earlier?

Helena: So maybe let’s start with an explanation of the Smart Mission Ecosystem. It consists of five elements – software parts and hardware parts. Let’s start from the very top.

This is Oryx, which is the onboard computer software. Then we have Antelope, which is the onboard computer with the DPU module that was already mentioned today, and there’s also Leopard, which is a data processing unit. There is also Oasis, which is the EGSE – unfortunately it will never fly or see space on its own, but it supports mission integration and checking that the whole system works correctly. And of course there is the Herd, a set of different AI algorithms for Earth observation, but not only.

We’ve already mentioned telemetry data analysis, so maybe let’s speak about the motivation behind it. The thing is that while building our own mission, Intuition-1, we noticed that although there is NewSpace and there is the CubeSat format, the delivery chain is very fragmented. Even if you buy different components, different subsystems, from different companies, at the end you still have to do some work on your side. You have to adapt the software for your mission.

You have to make these additional efforts so that everything works correctly. So we came up with the idea that it might be good to create an ecosystem where everything works with everything – and that’s why we call it an ecosystem. We think the crucial thing here is that, thanks to these components, we can deliver missions faster and cheaper, which is quite important in terms of constellations.

And then again, you spend less time, and the missions are safer, because there is no need to create everything from the beginning. It’s like a puzzle: you just bring different components, and they are already developed in such a way that they interact with each other and speak to each other.

Jakub: Yes. The idea is to keep the brain very close to the eye, because we are capturing image data, which is huge and might be difficult to downlink, as Helena just mentioned. The idea is to process as much as we can onboard the satellite to extract value – and this value might be very different, depending on the use case.

And it actually will depend on the use case – we’ll extract different value for agriculture, for object tracking, et cetera. The idea is also to uplink pre-trained models to the operating satellite. So we can perform training on the ground and then uplink the model straight to the satellite, which will be ready for running deep learning onboard.

Then we can use the pre-trained models onboard to process the new data coming from the sensor, which also means that we can decrease the amount of data transferred. As I mentioned, this is important because we want to get the data as fast as possible – in real time, if possible. If this happens, we can take faster actions in specific applications, which might be crucial, because otherwise the data would not be useful at all.

If it is, for instance, delivered back to the ground too late. So that is why we are bringing these operations onboard the satellite – we don’t want to transfer the entire dataset, just the important bits extracted from the data onboard Intuition-1.
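The pattern Jakub describes – train on the ground, uplink the model, score data onboard, and downlink only what passes – can be sketched roughly as follows. Everything here is a stand-in: `score_tile` uses mean brightness in place of a real deep network running on the DPU, and the scene, tile size and threshold are invented for illustration:

```python
import random

def score_tile(tile):
    """Placeholder for an uplinked, pre-trained model.

    The 'usefulness' score here is just mean brightness; a real mission
    would run a deep network on the data processing unit instead.
    """
    flat = [px for row in tile for px in row]
    return sum(flat) / len(flat)

def select_for_downlink(scene, tile=64, threshold=0.5):
    """Split a scene into tiles, score each onboard, keep only useful ones."""
    kept = []
    size = len(scene)
    for y in range(0, size - tile + 1, tile):
        for x in range(0, size - tile + 1, tile):
            patch = [row[x:x + tile] for row in scene[y:y + tile]]
            if score_tile(patch) > threshold:
                kept.append(((y, x), patch))   # downlink only flagged tiles
    return kept

random.seed(1)
scene = [[random.random() for _ in range(512)] for _ in range(512)]  # stand-in frame
kept = select_for_downlink(scene)
total = (512 // 64) ** 2
print(f"downlinking {len(kept)} of {total} tiles")
```

With real imagery the score would come from the uplinked model rather than a brightness average, but the downlink decision logic keeps the same shape: only tiles that carry value leave the satellite.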

Hywel: That’s great. And that’s a critical challenge that, as you say, integrators definitely face when using components from different suppliers. So thank you for describing the ecosystem there. And I love the individual names of the products – the Leopard, the Oryx, the Antelope, and then the Herd, where it all comes together.

Alongside the development of this hardware and software ecosystem, I know KP Labs is also developing the mission Intuition-1 in order to demonstrate some of this technology. Could you describe some of the key features of this mission and what you’re looking to achieve?

Helena: Maybe I’ll start with an introduction to our mission. It’s a hyperspectral mission with onboard data processing, and the idea behind it was that, since we have 150 bands, it is quite difficult to downlink all the data to the ground station. So we came up with the onboard data processing chain that later became the Herd and Leopard.

It is a 6U CubeSat and it is due to launch at the very end of 2022 or the beginning of 2023. It might be considered a flying lab, because thanks to this reprogrammability we’re able to check different scenarios – an agricultural scenario, or any other scenario where hyperspectral data is required.

Thanks to the Leopard, it can be easily reprogrammed, and the algorithms can be pre-trained and uplinked to the satellite. I think it is also important from another perspective, because some of our products will gain flight heritage thanks to this mission.

Of course, Oryx was already tested in orbit at the end of 2020. But the Herd, for example, will be fully deployed on Intuition-1. We also have smaller projects with ESA at the moment, and we are preparing for this.

But anyway, for the first time the Antelope will fly at, let’s say, full size on our own mission. So I think it is very important for us. It is a challenging project in itself, but I think it’s also quite fascinating, and we’re gaining huge experience while developing it.

Jakub: I could add that I liked the ‘flying lab’ phrase Helena mentioned, because that is exactly what we are trying to do. We are trying to build a satellite that is application-agnostic, so we can uplink any algorithm – given that we have the data to train on – and deploy any type of deep learning algorithm on board the satellite. That is, I think, exciting, because we could target different applications, mostly related to Earth observation. We can do almost anything with the hyperspectral data we’ll be capturing onboard.

Hywel: Brilliant. Well, best of luck with the mission. It’s an opportunity to demonstrate a system that could potentially deliver more value during the mission’s timeframe, due to the ability to upload new models or new algorithms, or to retrain the algorithms and uplink them.

So the value in year two of the mission could be higher than in year one, which is great – because normally, once a satellite is launched, that’s it. So that’s great.

On that note, the sort of value that you’re talking about – where do you see it being utilized in the industry? What are the typical profiles of customers that you’re looking to serve with the Smart Mission Ecosystem?

You’ve given us some examples already of the types of missions that can use onboard AI and novel processing, et cetera, but maybe just to focus a little on the Smart Mission Ecosystem itself.

Helena: First of all, we are looking at the NewSpace market, and right now we’re focusing on CubeSats. Of course, in the future we want to expand the product line and also cooperate on bigger satellites. But at the moment we are focusing on CubeSats, and the cooperation might come at different levels, from different angles. We are very open to cooperation with commercial missions, but from our point of view, Europe may not yet be at the level that we would like. So I would say that in Europe we are mostly focusing on student missions and on agency projects. This is the European reality, but we’re also looking at different mission types.

As we mentioned, it might be feature extraction for Earth Observation, or missions where near-real-time data is required – for example, crisis management, flood monitoring or forest monitoring. As Jakub already mentioned, it might be an agricultural use case, where we’re speaking about soil mapping. It’s a wide variety of different mission types, and I think it’s a huge advantage of this ecosystem that it can fit different mission types.

And of course, in terms of Earth observation, I have found estimates suggesting that only 5 to 15% of the data sent to the ground station is useful. So we have a lot of satellites, but the data that is sent to Earth is not worth that much. The idea is that we want better quality – for example, thanks to cloud detection algorithms, instead of sending all the data to the ground station we can send only the data and pictures that are useful.

Instead of having all the pictures in the chain, we can focus on the pre-processed pictures that would be really useful. So I would say that cloud detection or bad pixel mitigation are great examples of how we can use this technology right now – not in the future, not in 10 years, but right now. It is already available, and I think it will bring additional value to almost every mission.
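That cloud detection example can be reduced to a toy decision rule like the one below. The 30% cloud-cover cutoff and the hand-built masks are made-up values for illustration; in a real mission the mask would come from a trained cloud segmentation model running onboard:

```python
def cloud_fraction(cloud_mask):
    """Fraction of pixels an onboard classifier flagged as cloud."""
    flat = [px for row in cloud_mask for px in row]
    return sum(flat) / len(flat)

def worth_downlinking(cloud_mask, max_cloud=0.3):
    """Downlink an image only if it is mostly cloud-free."""
    return cloud_fraction(cloud_mask) <= max_cloud

# Two toy masks standing in for classifier output (True = cloud pixel)
clear = [[False] * 16 for _ in range(16)]                                       # 0% cloud
overcast = [[True] * 16 for _ in range(12)] + [[False] * 16 for _ in range(4)]  # 75% cloud

print(worth_downlinking(clear))     # True: worth sending to the ground
print(worth_downlinking(overcast))  # False: discard onboard
```

Applying even this crude rule to the 5 to 15% figure Helena cites shows why onboard filtering matters: if most captured scenes fail the check, the downlink budget is spent almost entirely on usable imagery.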

Hywel: As you say, it’s about increasing the value of existing missions and existing technologies simply by increasing the value of the data. So that’s fantastic. You spoke there about the customers you’re looking for today and the shape of the industry and the market today.

Just to finish up, if I could put you on the spot a little and ask about the market for AI-powered autonomy in space missions – the sorts of use cases and examples we’ve discussed today – where do you see it going in the next three to five years?

Helena: I think in terms of space, three to five years is quite a short perspective, and I would say that most of the missions we’ll see in this timeframe are already planned and under development. If we look at the different agency strategies, we can see what’s about to happen.

And there is growing interest in AI missions. There is also another statistic: over 10,000 satellites will be launched within the next 10 years, and around 1,000 of them will require onboard data processing.

So we can see that there is huge potential for such applications. In terms of our own missions and, let’s say, products, I can highlight the Phisat-2 mission – an Earth observation mission on which our algorithms, the Herd, will fly. There is also the OPS-SAT mission that I mentioned earlier, where we submitted the anomaly detection scenario to ESA. And there is also a hypothetical scenario where we can use our Leopard in space debris removal.

We see more and more missions, European and beyond, that are interested in cleaning up space and making it safer. I would say this is crucial, because non-operational spacecraft are quite difficult to maneuver around, and they make space quite difficult to use. Another example we can add here is PW-Sat3, a mission in cooperation with the Warsaw University of Technology. Our Oryx onboard computer software and an Antelope computer with the algorithms are due to fly on it.

Hywel: Fantastic. Well, I think that’s a great place to wrap up. Thank you both for sharing all these insights with us today. It’s been really interesting to learn how KP Labs is developing this ecosystem and looking to bring onboard data processing into more application areas. And obviously, best of luck with the Intuition-1 mission and its demonstration goals.

Just wanted to thank you both for being with us on ‘The Space Industry’ podcast today.

To all our listeners out there, you can find out more about KP Labs on the satsearch platform at satsearch.com. Feel free to use the request function on the platform for any questions you might have, any expressions of interest in the company’s technology, or for documentation and whatever else might help your procurement journey.

Thank you for listening to this episode of ‘The Space Industry’ by satsearch. I hope you enjoyed today’s story about one of the companies taking us into orbit.

We’ll be back soon with more in-depth behind the scenes insights from private space businesses. In the meantime, you can go to satsearch.com for more information on the space industry today, or find us on social media if you have any questions or comments.

To stay up to date, please subscribe to our weekly newsletter and you can also get each podcast on demand on iTunes, Spotify, the Google Play Store, or whichever podcast service you typically use.
