A guide to advanced data processing and AI for satellite missions

Webinar

Today’s satellite payloads have the capability to capture a greater volume of data than ever before.

However, this is placing increasing demands on a satellite operator’s data management, storage, and processing systems.

The process of downlinking satellite data to the ground also faces bottlenecks, from the fundamental limits of satellite passes and ground station coverage to issues of interoperability between ground segment systems and end-user applications.

The use of artificial intelligence (AI) capabilities and other advanced on-orbit data processing technologies is offering solutions to these problems for smallsat missions and services.

The video below is a recording of a satsearch webinar entitled 'A guide to advanced data processing and AI for satellite missions.'

In the webinar you can hear first-hand from experts on the production of useful information and valuable data products using advanced processing and AI.

To stay up to date with future events and all of our other work at satsearch, as well as to receive a weekly commentary on trending stories in the space industry, please join our mailing list today.





Presenters

The presenters in this webinar event were all paying members of the satsearch membership program.

The individual slide decks for the presentations are freely available on each company's supplier hub, linked below alongside each speaker's details (in order of appearance):

  • Helena Milevych, Head of Product Development, and Michał Gumiela, Systems Engineer, at KP Labs
  • Zoltán Lehóczky, Co-founder and Managing Director of Lombiq Technologies
  • Mathias Persson, Senior Business Development Manager at Unibap AB
  • Edwin Faier, President and Director of Business Development at Xiphos

Products

The following systems are all manufactured by the satsearch members who presented in the webinar; each was referenced in the talks or is related to the products discussed:

Oryx is a modular flight software tool developed for the mission control of small satellites. It manages all satellite tasks – namely, processing telecommands sent by the operators, monitoring the power budget, executing pre-defined schedules, managing emergencies and handling data.

The KP Labs Antelope can work as an On-Board Computer (OBC), with an optional Data Processing Unit (DPU), or as a standalone DPU. The OBC is the powerful heart of the satellite, responsible for satellite control and basic task performance, such as communication handling, monitoring the satellite's subsystems, handling the classic Fault Detection, Isolation and Recovery (FDIR) mechanism, and performing planned tasks.

Oasis is a single-board, CubeSat PC-104 compatible electrical ground support equipment (EGSE) unit that serves as an interface between the PC running the satellite system simulators and the hardware engineering model.

Leopard is a CubeSat standard-compliant Data Processing Unit (DPU) designed for the application of AI solutions in space. It uses deep neural networks to process data on board and features an FPGA to implement deep learning algorithms. The system has a throughput of up to 3 tera-operations per second.

KP Labs' The Herd is a set of AI-powered algorithms designed for Earth Observation (EO) data analysis.

Lombiq Technologies' Hastlayer system turns .NET software into FPGA-implemented logic circuits, resulting in faster and more efficient operations for image processing, SDR communication, data compression and other applications. Hastlayer can be used both in ground segment HPC applications (on-premise or in the cloud) and on-board satellites if a Xilinx Zynq-based On-Board Computer (OBC) is used.

The Unibap SpaceCloud® OS is a Linux-based operating system designed for space applications. Together with Unibap's software framework, and a wide application suite, it facilitates simple and reliable execution of Edge Computing, Autonomous Operations, and Cloud Computing in space. SpaceCloud OS’s Linux heritage combined with its reliability and robustness enables rapid software development for a wide variety of users, including those without previous space experience.

The Unibap SpaceCloud iX5-106 is designed for space applications. The iX5 family is Unibap’s most power-efficient and reliable computer solution for large and small spacecraft. It combines radiation tolerance and flight heritage, boasting a proven TRL 9 maturity. The iX5-106 model features an AMD Steppe Eagle Quad-core x86-64 CPU and AMD Radeon GPU paired with SATA SSD storage, a Microsemi SmartFusion2 FPGA, and an Intel Movidius Myriad X Vision Processing Unit.

Unibap provides Application Development Systems to enable mission customers to get a head start in their software development, and third-party SpaceCloud® users to create new applications. The iX5 and iX10 Application Development Systems (ODE, ADS-W, and ADS-X) contain the same CPU and GPU architectures as the engineering and flight models, with an easy-to-start hardware design.

The Xiphos Q7 is a 32 g (with RJ45 connector) system with a typical power draw of 1 W, based on a hybrid environment of CPUs and reprogrammable logic. It is a flight-proven processor board based on the Xilinx Zynq-7020, including dual-core ARM Cortex-A9 MPCore processors supported by programmable logic resources.

The Q7S is a 24 g (without RJ45 connector) system with a typical power draw of 1 W. It consists of a Q7 card equipped with space-ready software and firmware, based on a hybrid environment of CPUs and reprogrammable logic. The library of logic and software functions is augmented by onboard analog and digital I/O.

A daughterboard for Xiphos' Q7 hybrid processor card, enabling the Q7 to be inserted into existing systems with high-bandwidth video or imagery streams. It features 2x Camera Link (2x Base, 1x Medium, or 1x Full) interfaces, 4x SpaceWire interfaces, 4x USB 2.0 Master ports, serial interfaces, and GPIOs.

The Q8 is a 64 g (with power barrel and RJ45) system based on a hybrid environment of multi-core CPUs and reprogrammable logic. It includes a Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC) processing FPGA, and memory resources such as LPDDR4 RAM (with EDAC), 2x QSPI Flash (NOR), and 2x eMMC.

The Q8S is a 56 g (without power barrel and RJ45) system with >25 krad radiation tolerance. It consists of a Q8 card equipped with space-ready software and firmware, based on a hybrid environment of CPUs and reprogrammable logic. The library of logic and software functions is augmented by onboard digital I/O.

The Q8J extends the capability of the Xiphos Q8 processor, adding support for high speed JESD204B interfaces and access to external DDR3 or DDR4 memory. Suitable for SDR applications, the Q8J is delivered with a detachable PIM with standard interfaces, debug LEDs & other lab development features.

The Q8JS extends the capability of the Xiphos Q8 card with support for high speed JESD204B interfaces and access to external DDR3 or DDR4 memory. Suitable for SDR applications, the Q8JS consists of a Q8J card equipped with space-ready software and firmware plus a library of logic functions.

The Q8 SDR (Software-Defined Radio) Dock is a daughterboard for Xiphos’ Q8 hybrid processor card enabling integration of a GOMspace NanoCom TR-600 SDR module, based on Analog Devices’ AD9361 wideband transceiver RF System on a Chip (RFSoC). Featuring PPS, CANBUS, serial, UART, & USB interfaces.


Webinar transcript

Please note that while we have tried to produce a transcript that matches the audio of the event as closely as possible, there may be slight differences in the text below. If you would like anything in this transcript clarified, or have any other questions or comments, please contact us today.

Hywel, satsearch: Hi, and welcome to everybody. We're just going to wait a minute or two to give people a chance to stop snoozing their calendar notifications and join the webinar. It's great that we could have so many people here today. It's always tricky to pick the right time zone for these events when we've got companies and participants across the world, but hopefully this will work for a lot of people. Okay, I think we can make a start; quite a few people have joined there. Thank you to everybody who's taken the time to spend time with us today. My name is Hywel Curtis, I am head of market research, and we're hosting today's webinar on a guide to advanced data processing and AI for satellite missions.

The aim of this webinar series is to really delve into the technical topics that are currently gaining a lot of interest and attention in the industry, speak to the experts in those topics, the people with real firsthand knowledge of the missions and the technologies, and find out what these innovations are really about and what engineers, companies, student teams, and anybody else in the industry who is interested need to think about when making decisions for the missions of tomorrow.

We are going to hear from experts from four different companies (five people, but four different companies), and all of these companies are satsearch members. During the webinar we will also have the chat function running, through which you can chat to each other, or to us if you like.

But for specific questions: we're not going to run an audio Q&A at the end of the session. We've found that these can sometimes take a little too long, people can't quite get the answers that they need, or sometimes people can't stay right until the very end, et cetera. So we'll be running a text-based Q&A throughout the session using the Zoom function.

You can find this at the bottom of your screen. You can open the Q&A panel and then address questions generally to all of the presenters today, or to individuals, and they will do their best to answer those throughout. Obviously not while they're talking, unless there are some real geniuses here, but they will do their best to answer those before the end of the session; we'll have a little bit of extra time at the end, too.

So I think without further ado, we can get to the first presentation here today. Firstly, we're going to hear from Helena Milevych and Michał Gumiela from KP Labs. Helena, if you'd like to share your screen, we'll get going. And again, if the audience has any questions, please let us know in the Q&A or the chat.

Helena, KP Labs: Okay. First, let me just check that I can share my screen. Yes. Perfect.

Okay, I hope you can see it. First of all, thank you very much for inviting us and for organizing this webinar. I think the topic is really interesting, so we all have something to bring to the discussion. As was already mentioned, the company name is KP Labs, and today, along with Michał Gumiela, who is our systems engineer, I'll show you some examples of how onboard data processing and AI can be used in space.

Just to give a short introduction about us, why we are here, and why we have some expertise in this area: we are from Poland, the company was established in 2016, and right now we are almost 60 people. And if you look at our competencies, our products and projects, we could split them into four areas of interest: imagery, software, computing, and AI, which is the most crucial, most important part here.

So, a quick summary, just to give you some understanding of what we are doing. Within these five years we've completed eight projects; right now we have 12 more projects ongoing, we are about to start four big projects within the next one to two months, and the overall budget for these projects is over 10 million euros.

So why are we talking about onboard data processing and AI in general, and why is it so important for space? First of all, if you look at different predictions and different research (one example that I found is by Euroconsult), within 2020-2029 almost 12,000 satellites will be launched, and over 1,000 of those satellites will use onboard data processing. That is over 100 satellites a year, which is a huge difference compared with what we have right now.

And according to different research predictions, by 2030 we will successfully mine the Moon, or operate assets from the Moon, or manufacture in space directly, and AI will be commonplace. If you think about the areas of interest, the areas of space, where it could be used:

It could be Earth observation, increasing the efficiency of the satellite imagery that we use today; according to different numbers, we are using only around 15% of all satellite imagery data, which means that thanks to cloud detection, disaster mitigation, or some other techniques, we can increase this number.

Another area could be risk management, which will be beneficial for us as people on the Earth. There's also deep space: with the time delay in communication, we have to have autonomy and semi-autonomy in this area; this is a must. There are also space debris missions.

More and more companies and agencies are interested in space debris, and this is something where we could also use onboard data processing and AI. And last but not least is human spaceflight: not only flights, but also the human habitats that are about to happen on Mars or other planets.

So these are the areas that are quite important for us. As a company, what we are doing is the smart mission ecosystem, which is hardware, software, and algorithms combined together, where everything works with everything. So we have Oryx, which is modular onboard software, and we have Antelope, an onboard computer.

We have Leopard, which is a data processing unit to process and pre-process data onboard the satellite. There is also Lion, which is the bigger brother of Leopard, dedicated to bigger satellites. And of course there is the Herd, which is a bunch of different algorithms for Earth Observation, but also for telemetry data analysis, to check whether everything is working correctly with the satellite.

And of course there is Oasis, which is the EGSE; the idea is that we want to check beforehand that everything works correctly with the satellite. The idea behind these products, and our projects in parallel, is that on one hand we want to increase mission processing capabilities and mission safety and control.

But on the other hand, we also want to reduce the mission development time and mission cost, because that is super important when speaking about constellations or mega-constellations. So this is why we see huge potential for onboard data processing and AI. Here I pass the word to Michał Gumiela, who will speak about more technical issues.

Michał, KP Labs: Yep. Thank you, Helena. Thank you all for joining us. My part of the presentation will be a little bit different. I think that you're already convinced about the value behind in-orbit processing, so I would like to touch on the challenges and discuss with you how to tackle them. First: despite the growing interest in, and the clear benefits coming from, edge data processing, when you think about on-orbit processing you immediately run into the problem of constrained resources in multiple areas, just to name a few of them.

You can think about computational resources, which, especially for AI, can be limited on many space-grade platforms. The available power and heat rejection capabilities of your satellite platform might be another constraint. And of course, additional mass and volume for additional data processors might not be available in your mission budgets.

So at KP Labs, we designed integrated solutions to cover most of the payload needs with a single device. You can perform data acquisition from your instrument; you can have large-volume storage and data pre-processing capabilities; and, what is especially important to us, AI processing acceleration: all in a single device.

That device is what would normally be the essential part of the satellite payload: the payload computer. So we propose adding this value to your normal payload computer, and we propose here a very flexible architecture: a radiation-hardened supervisor to increase the robustness of the mission; a Zynq UltraScale+ MPSoC, consisting of an ARM CPU and an FPGA, to cover multiple applications, algorithms, and use cases; up to one terabyte of reliable flash memory and plenty of RAM with error-correcting codes; and of course a bunch of external interfaces to keep the solution compatible with different instruments, radios, and OBCs.

So the Leopard, the processing unit I'm talking about, can be all you need for payload management with data processing capabilities. And this is exactly what I would like to present, taking a hyperspectral satellite mission, Intuition-1, as the example use case. Apart from the platform, the satellite part consists mainly of just three components.

We've got the hyperspectral instrument (in terms of the electronics, this is just an image sensor), and the central part is the Leopard DPU. And the third component, very important in onboard data processing, is the processing algorithms, both classical and deep-learning based. All those three components cooperate with one another, allowing us to create an onboard data processing pipeline.

We start with raw frames from the sensor that are transferred to the Leopard DPU as an enormous stream of data, like six gigabits per second. An IP core in the FPGA part of the Leopard is used to receive and store the data. Then we may use various pre-processing algorithms to perform coregistration of the data, radiometric processing, and so on, to obtain a hyperspectral cube.

After that, a cloud mask is created to filter out useless data and avoid processing it further. Also at this stage, we can store the data for later downlink, or compress it, and so on. And then the clear parts of the imagery are processed in the most important step: value extraction. Of course, we don't want to downlink the whole hyperspectral picture.

This is really essential when we are talking about hyperspectral pictures, and it may also be very important for synthetic aperture radar data and other multi-dimensional or quite big data. So we want to make use of our segmentation and classification algorithms to select only the data that is really interesting for us.

Basically, we can transform a hyperspectral scene with multiple bands into just one image consisting, for instance, of pixels classified into the classes that we want to find in the picture. Putting this in numbers, that means we can squeeze a typical 50 by 50 kilometer scene from 1,500 megabytes to just four megabytes of data.
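(As a quick sanity check on those numbers, the arithmetic works out roughly as follows; note that the ground sample distance, band count, and bit depths in this sketch are illustrative assumptions, not figures quoted in the talk.)

```csharp
// Back-of-the-envelope check of the scene sizes mentioned above.
// Assumed (hypothetical) sensor parameters: 25 m ground sample distance,
// 150 spectral bands, 16-bit raw samples, one 8-bit class label per pixel.
const int sceneSideMetres = 50_000; // 50 km x 50 km scene
const int gsdMetres = 25;
const int bands = 150;
const int bytesPerSample = 2;

long pixelsPerSide = sceneSideMetres / gsdMetres;     // 2,000
long pixels = pixelsPerSide * pixelsPerSide;          // 4,000,000
long rawCubeBytes = pixels * bands * bytesPerSample;  // 1,200 MB raw: the order of the quoted 1,500 MB
long classMapBytes = pixels;                          // 4 MB classification map, matching the talk

System.Console.WriteLine($"Raw cube: {rawCubeBytes / 1_000_000} MB, class map: {classMapBytes / 1_000_000} MB");
```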

Of course, we first need to train the algorithm. This is quite an important and quite a big challenge for in-orbit data processing: we need to train the model for the practical use case. Of course, we can use a neural network architecture multiple times, and the architectures can be reused for multiple problems.

The typical use cases cover agriculture, including disease detection, surface type classification, and even soil parameter mapping, which we are currently working on, fine-tuning our algorithms to demonstrate it in practice. But as I said before, when we are thinking about on-orbit data processing, we always keep the resources in mind.

So the question may be: is efficient AI inference on embedded systems suitable for space applications? It is possible, and we proved in our recent research paper that that's true. 'Benchmarking deep learning for onboard space applications' tackles various image processing problems to cover multiple use cases: three state-of-the-art deep learning models, one for Earth surface segmentation, one for object detection (namely, in that case, crater detection on the Moon's surface), and the last one for Mars surface classification. We've proven that performance of over two tera-operations per second is achievable on just a single processing node of the Leopard. For our most compute-intensive model we got 20 frames per second of AI inference, testing different power profiles of the inference.
We showed that you can lower the power dissipation, or you can get higher peak performance, depending on the budgets of your mission, the technical budgets that you have. And what is really nice about this is that we did all the work on the Leopard DPU development model: the model designed for algorithm prototyping and testing on the ground, compatible with the flight model of the Leopard.

As I said, we treat testing on the ground very seriously before you deploy the algorithm in space, so this is also our solution for that challenge of how to test things on the ground. But going back again to our challenges: all the algorithms that I've just shown you normally cannot work without good data.

That means data at the proper level of processing. When you apply the algorithms to data available on the ground, you typically use, for instance, Level-2 data or so, but to have the same onboard, it is essential to perform all those steps in orbit. So you would require radiometric correction, atmospheric correction, geo-referencing, and so on. But the case is even more complex when the preprocessing requires data that is hardly available on board, like the state of the atmosphere, to properly perform the corrections. So once again here we decided to use deep learning techniques to solve the issue: our approach was to train our AI models with different atmospheric conditions to automatically compensate for the effects.

And actually, we succeeded in that experiment. We share our results in the paper; I really encourage you to read it and to apply that approach in your mission. In this brief presentation, I think I showed you that even though there are a lot of challenges, we believe that more missions could benefit from on-orbit processing.

We'll be able to tackle those challenges together. So thank you for your attention.

Hywel, satsearch: Great, thank you very much, Helena and Michał, that was really interesting. Thank you. I just want to remind everybody that if you have any questions on that presentation, or for the rest of the panelists in general, please feel free to use the Q&A function at the bottom of the Zoom panel.

Next we have Zoltán from Lombiq Technologies. So, Zoltán, if you're ready to share your screen, please.

Zoltán, Lombiq Technologies: Yep. Hi everyone. I am Zoltán Lehóczky from Lombiq Technologies, and I am glad to be here because, while we are not really a space industry player yet, we have just entered, or just have ambitions to get into, the space industry.

If I'm not reaching too far, I think we can provide a unique perspective from a software development standpoint. We are a software development company working on modern, high-level applications, and our Hastlayer tool can probably provide something useful for onboard data processing. Day to day we're actually doing web development, mostly with open-source Microsoft technologies. There's an open-source web content management system called Orchard, where we are worldwide market leaders, and we work with companies like Microsoft itself, Live Nation, or the Smithsonian Institution. But we have a couple of R&D projects, and one of them is Hastlayer.

Before actually talking about Hastlayer, let's talk a bit about what we see as relevant here for onboard processing in the new space sector. There are FPGAs, of course. FPGAs are proven when it comes to their power efficiency and performance advantages, and they are widely available. I would particularly focus here on the Zynq family of devices; I'm sure most people here are aware that a lot of new space companies, KP Labs included of course, have some kind of onboard computer or payload computer that's built around Zynqs, which couple an ARM CPU with an FPGA. However, FPGA development today, and that's also true for ground-based processing, still requires specialist knowledge, I would say. There are pre-baked accelerated libraries and there are high-level tools, but still, if you want FPGA acceleration, you need to actually understand and be able to use FPGAs. Also, when it comes to space specifically, there are many SDKs; every satellite or OBC manufacturer pretty much has their own SDK.

What's more, these SDKs are usually secret, hidden: you have to purchase the device and/or the SDK to even be able to look at it. Now compare this to how app development works otherwise, on the desktop, on the web, or for a smartphone: you have all kinds of SDKs, mostly open source, readily available, and you get all kinds of resources and development tools for free. This is not really what's currently in the space sector, but we can probably provide some kind of solution with the .NET platform and our Hastlayer tool. .NET, if you are not that familiar with it, is a software development platform which is cross-platform and runs everywhere, including Zynqs. It's open source, both the SDK and the runtime, and it has a runtime because it's a managed environment: there's garbage collection and all kinds of things that make executing your application safe. The worst thing that you can do is crash your app, but that's also quite hard and it's easy to recover from; you can't crash the operating system. And since it's a modern platform, development is easy; you can just do app development as usual and you get all the modern tooling: modern IDEs, debugging, static code analyzers, automatic testing, and everything that's in app development currently. And with Hastlayer you can also get automatic hardware acceleration with FPGAs, so you needn't drop the performance requirements, because Hastlayer is a tool that takes a computer program and turns it into a piece of FPGA logic.

Not just any computer program, of course: .NET programs. But .NET is a platform, not a programming language; actually, a lot of programming languages are supported. C# is the most popular one, but also C++, for example, or functional languages like F#, or even scripting languages like Python or PHP or JavaScript. Now, I'm imagining writing your synthetic aperture radar data processing code in PHP; of course, I wouldn't do that, though you might have got the gist of it. What we are talking about, in technical terms, is FPGA high-level synthesis that is very much focused on software developers. And for software developers: you don't need to understand FPGAs, you just write the code pretty much as usual, but you still get FPGA acceleration.

What we intend to do, or what we actually already have, is that if your algorithm is highly parallelized and compute-bound, you get a performance increase. You also get higher power efficiency, the benefits of FPGAs, but it's still software development as usual with .NET, with all the modern tools.

I would add that this is not meant for mission-critical systems, especially since it's a managed environment: the execution time is not deterministic, well, for the software part; the FPGA implementation is, of course. So, enough talk; now let us actually switch over to a hands-on demo, and what I'm showing you now is a bit of code.

I hope nobody is afraid of that. What we see here is an example of an algorithm that would be suitable for FPGA acceleration with Hastlayer. As you will see (spoiler alert), it's a massively parallelized, an embarrassingly parallel, algorithm. It's an exaggerated example.

It doesn't do anything too useful; it's just a synthetic example. It has some logic here which pretty much simulates computing stuff, and it does this in tasks, .NET Tasks; you can think of them a bit like threads. The point here is that we start this many tasks, 280 tasks, and those 280 tasks will run as 280 threads on a CPU, but not all at the same time, because the .NET framework will make sure that there's no starvation of resources; still, you've only got those two, four, or however many cores. On the FPGA, though, you get a hardware-level parallelism of 280.

That's because we generate 280 copies of the inside of your algorithm. And this is all standard .NET, so I've opened Visual Studio, which is a standard development environment for .NET developers. This is standard C#; there are pieces of the Hastlayer API, but the language is the same. Everything happens from code, you can debug it as code, it's just .NET as usual, except that you can automatically turn this into a piece of hardware with Hastlayer. Let's actually see how that works, because I also have a development board prepared here: this is a Trenz Electronic development board, here in the middle, built around a Zynq, and I've opened a remote connection to it. I will run our demo. What's happening now is that, first, the parallel algorithm that I showed you is executed as software, so we can see how the whole thing performs as simple software. This board has a dual-core ARM CPU clocked at around 650 megahertz, that's what we get on the CPU side, and on the FPGA side we have the FPGA clocked at around 150 megahertz.

The whole thing has finished; let's check the results now. As we see, the software execution was around 25 seconds; there, we do a lot of stuff, that's alright. Now let's check out the hardware execution time, which is down here: altogether it was around 313 milliseconds.

The hardware execution is about a hundred times faster than the software execution, which is not a big surprise, because we pretty much got a 280-core processor just for our application. Now, of course, FPGAs have limits. This is a simple algorithm, so it can have that many copies; even for more complex ones, though, you can get parallelism in the order of dozens, which is still a lot more than what you get with the two cores on the ARM.

By the way, if you are interested in what's behind the scenes: Hastlayer generates VHDL, so this is a piece of the VHDL that Hastlayer writes. You can check out Hastlayer, it's up on GitHub, and I will share the link to it. You can also check out the VHDL code that it generates and inspect how it works. It's commented code, and as you can see it's also formatted, but it's still generated code, so it's not that easy to inspect; you should be able to get the gist of it, though, if you are interested.
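(For readers following along, here is a minimal sketch of the kind of code being described: a fixed number of independent .NET Tasks, each running the same compute-bound kernel. This is not the actual demo source, which is available in the Hastlayer repository on GitHub; the kernel body is a placeholder, and the Hastlayer-specific steps of generating hardware and proxying the object are omitted, so as plain C# this simply runs on the CPU.)

```csharp
using System.Threading.Tasks;

public class ParallelDemoAlgorithm
{
    // On the FPGA, Hastlayer can instantiate one hardware copy of the kernel
    // per task; on the CPU, these tasks share however many cores exist.
    public const int DegreeOfParallelism = 280;

    public int[] Run(int[] input)
    {
        var tasks = new Task<int>[DegreeOfParallelism];

        for (int i = 0; i < DegreeOfParallelism; i++)
        {
            int seed = input[i];
            tasks[i] = Task.Factory.StartNew(() =>
            {
                // Placeholder compute-bound work standing in for the demo's kernel.
                int value = seed;
                for (int j = 0; j < 100_000; j++)
                {
                    value = (value * 31 + j) % 1_000_003;
                }
                return value;
            });
        }

        Task.WaitAll(tasks);

        var results = new int[DegreeOfParallelism];
        for (int i = 0; i < DegreeOfParallelism; i++) results[i] = tasks[i].Result;
        return results;
    }
}
```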

Alright, now, that's probably cool, but is there anything else? Well, yeah: we actually support the Vitis environment of Xilinx. What this means is that Hastlayer supports every Xilinx FPGA with Vitis SDK support, which includes not just all the Zynq boards, but also the Alveo cards, for example: the high-performance accelerator cards found in data centers.

So that means that if you are using Hastlayer, you can write code for onboard processing in the high-level, safe, convenient environment of .NET that will run on board a satellite or a drone or a robot. And you can run the same thing on a high-performance accelerator card in the ground segment as well, in the cloud or on-premise: the same code.

Of course, when running in the cloud, you will have a much bigger FPGA, probably 50 times bigger. And those FPGAs that we support are available in every major cloud. To give you a bit of an idea of the results, apart from that single sample I showed you: these are, again, simple algorithms that we have up on GitHub.

The benchmarks are up on GitHub with all their details. On Alveo cards in the ground segment, we get something between a four times and 34 times speed increase, and looking at the power efficiency increase, that's between 20 and 120 times. The power efficiency increases are nice, of course, but what matters most is that this corresponds to cost savings. On Zynqs we actually have nicer results: the speed increase is between 24 and 120 times, and the power efficiency increase is between 20-something and 150 times, depending on the algorithm. We want to have more use cases, and an in-orbit demonstration as well; that's the next step that we are planning.

If you have use cases, please let us know; I would be glad to talk about them. We are also in a partnership with the Wigner Research Centre for Physics, which allows us to test some scientific computations. And FPGAs are nowadays in every major data center; it's a technology worth investing in.

Pretty much, that was it for me. If you are interested, please check out the SDK on GitHub; the source I've shown you, all the other examples, the results, everything is up there. And be ready, because I think this field will see a lot more of FPGAs, both in ground segments and for onboard processing too.

Thank you very much.

Hywel, satsearch: Great, thank you very much, Zoltán, that was really interesting. It's great to have a demo of the technology as well; there are not many pieces of space hardware that we can have a live demo of on a webinar, so that was great, thank you. Okay, so next we'll be hearing from Mathias Persson from Unibap AB. And just to mention again, guys, if you have any questions for any of the panelists, please feel free to put them in the Q&A.

Mathias, Unibap AB: Thank you. Alright, thank you very much. I'd like to also acknowledge my colleague here, Soren Pederson, who I think will also be around during the Q&A session, helping with any questions that you might have. I'm representing Unibap, and Unibap is bringing cloud technology into orbit, to the satellite, extending the possibility of executing containerized applications in orbit on your satellite. We are a dedicated, I would say, payload data processing technology. We don't replace any onboard computers, et cetera, though we could give input to the control of things; it's purely to bring in sensor data, do cloud-based computing using containerized applications (using Docker containers), process and handle the data on orbit, and queue up and download data to the ground segment. So what we allow would be data preparation and meta-tagging on orbit using AI and machine learning, and we can also create a full spectrum of very low-latency data products.

The solution itself is an x86-based, heterogeneous architecture, so it has a lot of heritage and can actually be deployed on orbit to support your missions. And this is a kind of schematic to show the different aspects of cloud-based computing in space, where we, for instance, may have a satellite using SpaceCloud that finds something of interest in the image, or in the data stream captured by sensors on board; these could be optical sensors, RF-based sensors, or whatever.

Then that data could either just be prepared, filtering out anything not of relevance, and then you queue up the data to be downloaded, deleting all the data that would be, let's say, full cloud coverage, et cetera, or whatever would be of no relevance. Or you could actually get a very low-latency data product already manufactured, or produced, on the satellite using a processing pipeline, and that information could be forwarded using inter-satellite communication, for instance, or just queued up as a high-prioritized download at the next satellite pass, and then be readily available for the end user.

Another scenario: we also heard earlier about introducing autonomous operation, an autonomous mission or parts of the mission. I would say one way would be to autonomously task the next satellite in line, for instance, in the event of something of interest, to provide a tipping and cueing capability where satellites could continuously detect and monitor something of interest without the need to do processing on the ground beforehand and then reschedule satellites after a certain number of hours to take care of that in the next passes, et cetera. And this would of course be much more important if you talk about a lunar mission, or a mission on Mars or wherever, where there is going to be high latency in the communication between an operator and the satellite. The other aspect of this would be the increasing spatial and spectral resolution of sensors, for instance: it's a huge amount of data that they can produce at high frequency, and downloading all that raw data to do the processing on the ground, if you have a larger constellation of satellites, suddenly becomes a very cumbersome task. Of course, the ground station providers would be very happy about that, because it would require a lot of communication.

But on the other hand, we think a lot of things could be handled on the satellite. With a smart way of distributing or segmenting your compute tasks, you could actually find yourself in a situation where you have a much lower cost for your mission, and/or you have data products with much lower latency for your end users.

So what we provide to enable this would be the spaceflight computers and the associated ground equipment for development, an operating system, and a framework SDK for the developers. And then we have a number of different application partners and a growing ecosystem of different applications, where already, let's say, pre-tested, pre-configured applications can be used in the processing pipeline on orbit to provide different compute tasks; I'll talk more about that.

The software stack is essentially a Linux distribution that is based on Ubuntu. It's a frozen one, where we have some tweaks to the kernel and some specifics in the drivers that enable this to work seamlessly on our flight hardware. On top of that, we have the SpaceCloud framework that gives you these cloud computing capabilities to execute code and applications in containers on the satellite, and then a number of diverse applications on top of that.
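(As an illustration of what a containerized processing application can look like at its simplest, here is a hedged sketch: a small program that scans a mounted input directory for sensor frames, drops uninteresting ones, and stages the rest for downlink. The directory paths, the file naming, and the idea of exchanging data through mounted volumes are our own assumptions for the example, not the SpaceCloud API.)

```csharp
using System.IO;

// Hypothetical containerized processing step. The mount points below are
// illustrative assumptions, not SpaceCloud-defined paths.
const string inputDir = "/data/input";
const string downlinkDir = "/data/downlink";

Directory.CreateDirectory(downlinkDir);

foreach (var framePath in Directory.EnumerateFiles(inputDir, "*.raw"))
{
    byte[] frame = File.ReadAllBytes(framePath);

    // Placeholder relevance test, standing in for e.g. a cloud-coverage check.
    bool interesting = frame.Length > 0 && frame[0] != 0;

    if (interesting)
    {
        // Queue the frame for download at the next ground station pass.
        File.Copy(framePath, Path.Combine(downlinkDir, Path.GetFileName(framePath)), overwrite: true);
    }

    File.Delete(framePath); // free onboard storage either way
}
```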

We really think this is a key enabler for future developers coming from traditional software development on the ground for terrestrial applications (image processing programs, or other types of applications that you would like to port to the space platform), and that can be done fairly straightforwardly and easily using this framework.

This is an example where we are right now flying on the D-Orbit ION 'Dauntless David' Wild Ride mission. And just as an example: within four months from start, from actually when we were in agreement on what to do, we delivered the hardware to the satellite and performed integration of up to 23 different applications that have been running on the satellite.

We have been showing and demonstrating that it's really possible to have very rapid deployment of software on board the satellite with our solution. And just to showcase the small form factor: it's a half-a-unit-sized computer, called the iX5-100 in this case.

We are also developing the next generation of the onboard processing unit, having more than 20 times, up to 50 times, more capability than the existing one; that is in process. This is a software example where we have actually ported the full software suite of ENVI. ENVI/IDL has been put into SpaceCloud as an application, meaning that the full software suite is there, and one could write a small application of five to ten megabytes in size, then load and orchestrate it to perform on-orbit processing of images captured by the sensor. This software has quite a long heritage in terrestrial computing, and this is an example of what we did with it: a hundred-square-kilometer multispectral satellite image from WorldView-3. This was canned data.

We did not have the sensor on board on this mission, obviously not a sensor capable of 30 centimeters of resolution. But with this image, the task was to find the in-flight airplane in the image, running an algorithm to find that needle in the haystack, as the challenge was called, together with SaraniaSat, one of our partners.

They developed the algorithm for detecting airplanes in flight, and this is what can be achieved within 10 seconds on the iX5-100, just to give you an example of what can be achieved on orbit. I think with that I would stop my presentation here, maybe pick up a few discussion points later on, and I'm happy to answer any questions.

Hywel, satsearch: Great, thank you for your time. There were a couple of questions asked in the chat function for you, so maybe you or Soren could have a quick look; some really interesting ones. And then finally we have Edwin from Xiphos. So Edwin, if you're ready, please go ahead.

Edwin, Xiphos: Yes, that's good. So my name's Edwin Faier, I'm the President and Director of Business Development for Xiphos. I'll first give a little background on Xiphos and our products. This is going to bring the level of granularity right down to the actual processing boards and some of the low-level functions that you can do, because that's what Xiphos does.

We've been doing this since 1996; I call us the grandfathers of NewSpace. The idea was to take terrestrial computing and communication products and bring them up into harsh environments, and of course, it's difficult to get much harsher than space.

Effectively, what we do is use industrial-grade components in a fault-tolerant architecture; our architecture allows you to use these in a space environment, at obviously a much smaller fraction of the cost of a space-grade solution. Our products are basically processor boards. FPGAs have been given a lot of press this last hour, and I will do the same: our products are based on multiprocessor system-on-a-chip FPGAs, and they are generic computing modules that would be used in a subsystem. Our target applications and target markets are obviously satellites, and increasingly unmanned vehicles and science experiments on the Moon.

I'll just briefly go over a few of the key products, to provide context for the rest of the slides in terms of data processing. One of our products is the Q7. It is based on a Zynq 7020 FPGA, similar to what was mentioned by Zoltán, and this has a dual-core ARM processor.

We have some other supervisory functions on board, et cetera, to make this a space product. It's got everything you need in a processor, but we also leverage the FPGA effectively. This is a very small board, about business-card sized, weighing about 24 grams. And then, effectively, what we do is take most of the I/O from the Zynq FPGA and bring it out to what's called a mezzanine connector on the bottom side of the board.

So it can be used with an application-specific daughterboard that is typically customized to the application. Another product that we have, the Q8, is based on the UltraScale+, which was also mentioned in this webinar. The reason the UltraScale+ is so interesting is that it's got effectively seven processors on board.

It's got four application processors, two real-time processors, and a GPU, and it has about five times the FPGA logic of the Q7. So this is for higher compute requirements; it's an excellent product, and it's seen its way into Earth observation systems, SAR systems, software-defined radio systems, and so on. A close variant of the Q8 is the Q8J; this is similar to the Q8, except we brought out additional gigabit-per-second interfaces, mostly for software-defined radio applications. A typical project or subsystem would include the processor, which is on the left, and that would be installed on a daughterboard, which would have the specific I/O, connectors, form factor, and functionality required for the mission.

Every mission is different: some missions might require mass memory on board, like a solid-state drive, and different interfaces and connector types for their applications. The daughterboards are typically custom, but leverage the I/O, the FPGA space, and the CPU that are built into the processor boards.

So, to tee this up a little: everyone has talked about what FPGAs are and so on, so what makes them so interesting and useful for advanced data processing? On the left you have the Zynq 7020; on the right you have the UltraScale+. Effectively, what's important here is the fact that you have the embedded processor cores.

Two in the case of the Zynq, like I said, and up to seven, with four application processors, in the UltraScale+. That's where you typically run your operating system, like Linux and so on, as well as all the standard peripherals that you'd require to build a CPU into your FPGA.

And then the important part, of course, is the programmable logic, with built-in memory and all the logic gates and flip-flops and so on that are part of the subsystem. These devices also have embedded hard cores for communications (gigabit Ethernet interfaces, USB, CAN, and so on), so that functionality can also be brought out into the subsystem.

It's been touched on, but just to repeat it: why do you need advanced data processing?

Today we need increasingly complex algorithms on smaller platforms, requiring low power and little space, with constrained power, of course.

Today we have very high resolution sensors (I'm fighting PowerPoint, excuse me), high resolution sensors and so on. To take that data and feed it to a processor, you have to do some pre-processing, and that's where the FPGA comes in. So not only do you have to interface the FPGA to the sensors, but you have to be able to pre-process the data so it can be handled by the CPU itself; modern sensors today generate gigabits per second of data, so it's impossible for the processor to just keep up. So you need that logic. Another application is software-defined radio, where you're not talking about data processing per se, but you are actually processing very high speed digitized RF, gigabits per second.

So you need that FPGA front end to do the preprocessing. Some examples of where this is used: like I said, interfacing with the sensors, typically in an Earth observation type application, for example. It looks like my presentation is misbehaving, but there are various interfaces to the cameras.

These would include CameraLink, SpaceWire, LVDS, and Gbps transceivers. Then you need real-time processing, and it has to be performed in the logic of the FPGA before the data is provided to the CPU. This allows us to use a standard non-real-time Linux OS combined with the real-time front end that's happening in the FPGA, in order to do this real-time application.

So, some examples of pre-processing, or data processing, that we have done.

First, at the very front end, you have to correct the imager. That'll include some gain and offset adjustment, and maybe some lens distortion correction of the image itself, because from the camera you might have bad pixels, and since you're going to be doing processing on the data, you don't want those bad pixels to infiltrate your data and reduce its quality.

So you have to correct for those. Binning is a very common function where you're effectively reducing the dataset by factors of four or eight, reducing four or eight pixels into one; it reduces the resolution, but allows you to process in real time. Coadding is a function where you'll add multiple images together to improve the signal-to-noise ratio.
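(Binning is simple enough to show in a few lines. The sketch below is illustrative software only; on boards like the Q7 and Q8 this kind of operation would be implemented in the FPGA logic, processing pixels every clock cycle rather than in a loop. Here, 2x2 binning sums four sensor pixels into one output pixel, quartering the data volume.)

```csharp
using System;

public static class Preprocessing
{
    // Illustrative 2x2 pixel binning: each output pixel is the sum of a
    // 2x2 neighbourhood, clamped to the 16-bit range. Cuts the data 4x.
    public static ushort[,] Bin2x2(ushort[,] frame)
    {
        int rows = frame.GetLength(0) / 2;
        int cols = frame.GetLength(1) / 2;
        var binned = new ushort[rows, cols];

        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                int sum = frame[2 * r, 2 * c] + frame[2 * r, 2 * c + 1]
                        + frame[2 * r + 1, 2 * c] + frame[2 * r + 1, 2 * c + 1];
                binned[r, c] = (ushort)Math.Min(sum, (int)ushort.MaxValue);
            }
        }

        return binned;
    }
}
```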

The same with TDI. Centroiding we've also done: for example, you have an image and you're looking for the central point of that image; we've done applications where we've been able to centroid at 30,000 frames per second using the logic. Feature detection is a very important thing for rovers and so on.

And of course compression. For software-defined radios, we have all of the DSP building blocks that are required to interface to the RF transceivers at the front end: the DDCs and DUCs. I just want to talk a little bit about hybridization. Hybridization leverages the tight coupling of processors and logic in an MPSoC FPGA.

Logic itself excels at computing things in high volumes, like Zoltán described, where you have a lot of similar calculations all going on at the same time; CPUs, of course, are good at other things. So hybridization allows you to take your code and exploit both on an FPGA. We've developed a methodology where we take conventional C code, profile that C code to see where the processor is actually spending a lot of its time, and then identify those pieces that are amenable to being ported to an FPGA.

We then implement those in VHDL, and the software is updated to access the VHDL code as opposed to the software library or software function, and you end up with an accelerated application. As an example, the top part of the chart shows what CPUs are good for.

They do one operation at a time. So, for example, if you have three operations, after the fourth cycle you'll have your output. Whereas in the FPGA, you can load that pipeline with data every single clock cycle; so once the pipeline is filled, instead of having one result every four clock cycles, you'll have a result every clock cycle.

That's just an example of pipelining and how it can be improved in an FPGA. If that works well, then what works even better is if you then replicate that, so that you're able to process multiple data streams at the same time. And that is the advantage of hybridization. So what we've done is we've taken various algorithms; for example, the space agency, our agency in Canada, had particular interest in a variety of particular algorithms.

What we did is we hybridized those algorithms, and we compared them to their operation in the typical development environment, which is: you develop your algorithm on a PC running an i7, and you test your algorithm, but then the problem is, okay, how do you get that running on a space processor?

One that's running at two watts, as opposed to my hundred-watt Intel i7. So we've hybridized these various algorithms as examples, and you can see their performance. If you look at the first column, that shows the performance versus an i7 running at 3.46 gigahertz, at multiple hundreds of watts, versus running on two watts.

This is obviously very applicable for either satellite or rover applications. You can see that, in general, it depends on the application, sorry, on the algorithm itself; some are more amenable to the advantages of hybridization than others. But you can see that, in general, real-time performance can be met with a two-watt processor, or in some cases even multiples of it.

And of course, the big saving is on power. If you look at the third column, that shows you the power savings when running on a two-watt processor versus the i7, and effectively it makes the difference between a mission being able to happen or not. To give another example, this is a product, a system we have developed, called EVO.

It uses the hybridization of various algorithms, and it performs what's called visual odometry. So, you're on the Moon: there's obviously no GPS, and you have to be able to know where you're going and keep track of where you're going; that's what visual odometry is. It acts as a sensor that gets connected to the rest of the GNC of the rover, to do localization, to tell the rover where it is. On the right side: it's basically a stereo camera, which is connected to a Q7 board via a camera board, and we ran it through its paces. In that graph you see a chart of the ground truth, which is basically a GPS measurement of localization through a specific route.

And in green are the results from EVO. So what you have here, something that typically, again, would run on a laptop sitting on top of the rover in the case of a terrestrial application, is now able to run in real time. This was actually operating at 11 hertz, able to operate in real time to help with localization.

We also did some other interesting things with EVO. That little video on the bottom is actually our hazard detection and avoidance algorithms running. Because we're using stereo, we're able to localize whether there's a hazard, and that's put into the GNC to actually stop the rover

before it does itself some harm. It was tested with various types of obstacles, and we've also run an algorithm to do disparity mapping and 3D point clouds. So you can actually get a 3D point cloud; for example, when the rover's at rest, you can get a 3D point cloud to be able to plan the science.

The whole idea of EVO is to get better localization and to support more autonomy, because the more autonomy a rover has, the more science the scientists can do. As you can see, the average error was about 1%, and again, this was using the Q7; the performance would be even better on the Q8. We did our testing with a rover at six kilometers an hour, though we actually tested up to 10 and 15 kilometers an hour. Now, there are not going to be too many rovers taking a joyride on the Moon; six kilometers an hour is certainly much more than needed.

Just to touch on the other elements: there's been a lot of talk about AI, and we're not an AI company, but what we want to do is enable our customers to use AI on their platform. So we've ported Vitis AI, which was mentioned as well, to the Q8. Vitis AI supports various frameworks like TensorFlow and Caffe and so on, and it provides the unified software platform, the various elements that are required to develop an AI application and get it running on, for example, the UltraScale+. And there's something interesting about Vitis.

It uses something called a DPU, a deep learning processing unit. These DPUs are instantiated into the logic, and they act as an artificial intelligence co-processor to the application processors in the UltraScale+. The DPU has shared access to the DRAM on the board, along with the host CPU, and this coprocessor will run the compiled code that is generated by the Vitis toolset; again, it acts as an AI coprocessor. Just as an example, because again we don't develop our own AI applications: we have a partner company out in Ottawa, a couple of hours away from us, and they're developing some very interesting algorithms using AI.

And this is running here on the Q7 and Q8. Effectively, what they're doing is terrain classification: in real time, the ability to use whatever cameras are onboard the rover (generally the navigation cameras, but it could be a hyperspectral camera) to identify and classify the terrain.

For example, if you look at the bottom, the picture on the left is the image from the cameras, and what's on the right is a real-time overlay where AI has been used to identify the terrain: for example, whether it's regolith, whether it's a crater, or the interior or exterior of a crater. It's color-coded for the ease of the operator, and the intent here, just as with EVO, is to accelerate the science.

Another thing it performs is something called novelty detection. You expect a crater, you expect a boulder, but you may not expect a small meteorite; if that's in the frame, the novelty detector will provide that information to the scientists.

And so again, they can more quickly decide on the science that they want to do. They use various networks and various algorithms for this, which are indicated here. And this is actually going to be flying on a mission shortly: the Emirates lunar mission, on their rover, on a Q7.
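The talk doesn't specify which networks are used for novelty detection, but one common scheme is to train an autoencoder on expected terrain and flag patches whose reconstruction error is unusually high. Here is a minimal sketch of that idea in TensorFlow (one of the frameworks mentioned); the architecture and threshold are illustrative only.

```python
# Hedged sketch of reconstruction-error novelty detection; not the actual
# networks used by Mission Control.
import numpy as np
import tensorflow as tf

# Tiny convolutional autoencoder; in practice it would be trained only on
# "expected" terrain patches, e.g. autoencoder.fit(nominal, nominal, ...).
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

def is_novel(patch: np.ndarray, threshold: float = 0.01) -> bool:
    """Flag a 64x64 patch whose reconstruction error exceeds the nominal range."""
    x = patch[np.newaxis, ..., np.newaxis].astype(np.float32)
    error = float(np.mean((autoencoder.predict(x, verbose=0) - x) ** 2))
    return error > threshold
```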

So again, that's on a two-watt processor. They're also doing work with our Q8 processor, in Iceland right now and in their own indoor moonscape in their office building, which is always fun. Now, for them Vitis AI wasn't enough, because they wanted to be a little more hardware-, platform-, and processor-agnostic.

So they've developed their own tool chain for this, to make it a little more generic; it's based on the NNEF format. I'm certainly not an expert on this, but if you're interested in any information, or in help with getting AI algorithms implemented on a low-power space platform, please reach out to Mission Control.
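NNEF is the Khronos Group's framework-neutral exchange format for trained networks. As a rough illustration of why it helps with portability, the sketch below loads and inspects a hypothetical NNEF model using the open-source `nnef` parser package from KhronosGroup/NNEF-Tools; this is not Mission Control's tool chain.

```python
# Hedged sketch: inspecting a framework-neutral NNEF model with the open
# source `nnef` parser (KhronosGroup/NNEF-Tools). The path is hypothetical.
import nnef

graph = nnef.load_graph("model.nnef")  # e.g. exported from TensorFlow or Caffe
nnef.infer_shapes(graph)               # fill in tensor shapes across the graph
for op in graph.operations:
    print(op.name, op.attribs)         # operation type and its attributes
```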

I have Michele's contact details at the bottom of the screen, whether you need support with the AI algorithms or want to access their tool chain. So, just in conclusion, besides realizing that I lost the battle against PowerPoint: hybrid processing. The key point is that in a hybrid FPGA-and-multiprocessor system, the FPGA can be leveraged to perform that fast data processing.

In the case of the Q7 that's the Zynq-7020, and in the case of the Q8 that's the UltraScale+. Generally the real-time processing is performed in the logic before the data is provided to the CPU. That allows you to use standard, non-real-time operating systems, which eases development and reduces your build costs.
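To illustrate the pattern Edwin describes, where the programmable logic does the real-time work and hands results to a CPU running a standard OS, here is a hedged sketch using the PYNQ framework on a Zynq-class device. The bitstream and DMA names are hypothetical, and Xiphos boards use their own software stack rather than PYNQ.

```python
# Hedged sketch of the CPU/FPGA split: logic preprocesses a sample stream,
# the non-real-time CPU only ever sees the processed result.
import numpy as np
from pynq import Overlay, allocate

ol = Overlay("preprocess.bit")  # hypothetical bitstream: preprocessing core + AXI DMA
dma = ol.axi_dma_0

raw = allocate(shape=(4096,), dtype=np.uint8)       # raw sensor samples
filtered = allocate(shape=(4096,), dtype=np.uint8)  # logic-side result buffer

raw[:] = np.random.randint(0, 256, raw.shape, dtype=np.uint8)
dma.sendchannel.transfer(raw)        # stream samples into the logic
dma.recvchannel.transfer(filtered)   # logic writes processed data back to DRAM
dma.sendchannel.wait()
dma.recvchannel.wait()
# From here on, CPU code on a standard OS works with preprocessed data only.
```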

You can use techniques like hybridization to actually get very complex algorithms running in real time on the processor. And again, inference can be done very quickly and cheaply in an FPGA to get AI applications running, whether it be through Vitis AI or custom tool chains from third parties. That's it.

Thank you very much, and thanks for the opportunity. I apologize for my battle with PowerPoint, but thank you for your time, everybody.

Hywel, satsearch: Great. Thank you very much, Edwin. It was really interesting to see the results there and all of those examples. Really appreciate that.

Thank you to you and to all of our presenters. That was our final talk today; I'm just going to share my screen. As I've mentioned, the Q&A text function is active and has been running throughout, so if you have any final questions, or any general questions you'd like answered, please feel free to ask our presenters.

And just to give some time for people to ask questions based on Edwin's presentation and the rest of the talks, I'm going to summarize some of the key points from the session today, and then talk a little bit about our work at satsearch and some things we have coming up.

So firstly, today we heard from Helena and Michał from KP Labs, who discussed the reasons why we're seeing so many satellites with onboard AI each year, and the requirements for that. They talked about the applications of the technology: deep space missions, Earth observation, human spaceflight, et cetera.

They also covered the challenges these systems face given constrained resources, the limited volume, power, et cetera, that's available, and walked through an example processing pipeline. I was really interested to see the stages the systems must go through in order to provide useful, valuable data, and it was good to see KP Labs' portfolio there as well. Next we heard from Zoltán of Lombiq Technologies, who discussed the challenge of onboard data processing from a software development perspective. He talked about the benefits of FPGAs and the contrast between space software development and web app development, and also gave us a really good live demo of Hastlayer, demonstrating the value of this form of hardware acceleration and what the system can achieve.

That was great, as well as touching on a bunch of other technical aspects there. Then we heard from Mathias Persson from Unibap, who discussed the company's SpaceCloud ecosystem and cloud-based computing and data processing in space, including applications like meta-tagging, autonomous tasking, and intelligent ground tasking as well.

Yeah, it was really interesting to learn more about the SpaceCloud ecosystem and the software stack, and how the different parts work together. He also showed us some really interesting examples of data; the imagery geolocated on the aircraft mid-flight was particularly interesting. I know that demonstration used canned data, but, as was discussed, it's a case of radiation hardening and testing and getting these things flying.

Then we'll have applications that can be accessed on much shorter timescales than using canned data. So that was brilliant. Finally, we heard from Edwin from Xiphos, one of the self-proclaimed grandfathers of NewSpace, who discussed some of the technical specifications of the company's processing hardware and the use of FPGAs for effective preprocessing and in other parts of the data processing chain. FPGAs, as you will have realized, came up many times throughout the talks today.

It was really interesting to learn more about the technology. Edwin also discussed how the logic is leveraged in order to process high-speed data, and algorithm hybridization, where computation is shared between the CPU and programmable logic, and what benefits this brings to applications.

He then gave us some examples of those applications as well, particularly processing data using AI for rover navigation and the research that can be carried out on rovers. That was fantastic. So yeah, I hope you enjoyed hearing about all of these different applications and areas. Remember, you will get the slide decks from the different presentations, and we'll also provide the video recording of the session, so please look out for that in a follow-up email. And just before you go, I want to share with you details of our next webinar. Some of the applications today touched on Earth observation; in fact, quite a few of the applications discussed did.

And of course, the core technology in that whole set of Earth observation applications is the cameras. So the topic of our next webinar, which is on the 15th of December, is going to be a guide to selecting Earth observation cameras for satellite missions. As I say, that's the 15th of December 2021, at three o'clock Central European Time.

With all the different options on the market today, and the complexities of different subsystems, aperture sizes, and satellite form factors, it can be quite a tricky task, possibly an increasingly tricky task, to select the right payload for a mission. So again, we'll be hearing from experts; they're all listed there: Berlin Space Technologies, Dragonfly Aerospace, Redwire, Satlantis, SatRevolution, and Simera Sense.

If you'd like to join us for that webinar in December, you can register at the link that should be provided there in Zoom. We would love to see you all again. And of course, in the meantime, the satsearch webinar series is just one aspect of our work trying to open up and develop the space industry as much as we can.

So here are a few other quick notes on how satsearch works and how you can get involved in the marketplace for space. Firstly, if you are a space industry supplier yourself, or you represent one, and you'd be interested in listing your own products and services, please do take a look at the membership information and the application process there, to discuss how we might be able to help you access the global industry.

Secondly, and possibly more likely for the audience here: if you are an engineer, researcher, or potential buyer in the space industry, then you can find out more about the technologies we discussed today, and about thousands of other products, services, and companies from all around the world, on our platform, satsearch.com.

The platform includes a free request system that you can use to request technical details, documentation, company introductions, quotes, information on lead times, or anything else that you might need for trade studies and procurement purposes. And finally, here is how to stay up to date with our work and the content we put out.

There are multiple ways to get in touch with us, like any company online today, but to focus on three quickly: we have a podcast called The Space Industry, where you can hear in-depth discussions on space technologies and firsthand experiences from companies across the world. In fact, several of today's presenters have spoken on that podcast over the last 12 months since we launched it.

You can find that at the link in the chat or on any good streaming platform. Then there's our weekly newsletter, where we share the trending stories from around the space industry, along with insights from our members and our work too; that's just a free email sign-up. And finally, as well as social media, there is our Slack channel, which is open; you can register for that, interact with the satsearch community, and discuss anything about the space industry that you'd like. We're quite a friendly bunch there, and we would love to see you there as well. All these links are in the chat. And with that, thanks again for spending time with us today.
