The Moment Has Arrived: AirStack 1.0 is Available

John Ferguson – CEO
February 26, 2024

I am thrilled to announce that AirStack 1.0 is now available for all Deepwave platforms! Does this mean that AirStack is bug-free and feature complete? Not just yet, but we are continually working towards that goal.

So, what does this mean? It means that the AirStack development team has completed the large majority of planned features and, to mark that milestone, we are proud to announce AirStack 1.0.

Along with the new features listed below, we have added many new tutorials and documentation to assist our customers in implementing the new and exciting capabilities.

Test Drive the New Timing API

Accurate timing is crucial in software defined radio (SDR) applications, especially in scenarios where multiple devices need to work seamlessly together, such as in communication networks, radar, or geolocation systems. The timing API enables synchronization at the nanosecond level, ensuring precise coordination and even distributed phase alignment. This accuracy is particularly vital in applications like spectrum monitoring, distributed coherent processing, and geolocation.

With this release, Deepwave is happy to announce the new time application programming interface (API) that allows for the precise starting and stopping of receive and transmit streams. The integration of a timing API in the AIR-T offers numerous benefits, enhancing the overall functionality and efficiency of the radio system. With respect to the AIR-T Edge Series, the combination of this API with the embedded GPS Disciplined Oscillator creates a turnkey solution for a wide array of distributed timing applications including beamforming, time difference of arrival, and mesh networking.

Please see the Time API tutorial for more details and implementation guidance.
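
For reference, here is a minimal sketch of what a timed receive can look like with the SoapySDR Python bindings. The driver name, channel, sample rate, and center frequency below are illustrative placeholders, and the Time API tutorial remains the authoritative reference.

```python
import numpy as np
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32, SOAPY_SDR_HAS_TIME

# Open the radio (driver name and tuning values are illustrative placeholders)
sdr = SoapySDR.Device(dict(driver="SoapyAIRT"))
sdr.setSampleRate(SOAPY_SDR_RX, 0, 31.25e6)
sdr.setFrequency(SOAPY_SDR_RX, 0, 2.4e9)

# Set the internal hardware clock (e.g., to zero, or to GPS/system time)
sdr.setHardwareTime(0)

# Schedule the RX stream to activate 1 second from "now" on the hardware clock
start_time_ns = sdr.getHardwareTime() + int(1e9)
rx_stream = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32, [0])
sdr.activateStream(rx_stream, SOAPY_SDR_HAS_TIME, start_time_ns)

# Read samples; the first buffer begins at the scheduled activation time
buff = np.empty(16384, dtype=np.complex64)
status = sdr.readStream(rx_stream, [buff], len(buff), 0, int(2e6))  # 2 s timeout
print("Samples read:", status.ret, "activation timestamp (ns):", status.timeNs)

sdr.deactivateStream(rx_stream)
sdr.closeStream(rx_stream)
```

Because activation is referenced to the hardware clock, radios that share a common time reference (for example, the GPS Disciplined Oscillator on the AIR-T Edge Series) can begin reception at the same instant.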

Perform AI Inference using NVIDIA Triton Server

NVIDIA’s Triton Inference Server is an open-source inference serving platform that facilitates the deployment of machine learning models at scale, providing efficient and scalable inference services for a variety of AI applications.

As part of the AirStack 1.0 release, we are enabling customers to create a Triton Inference Server on the AIR-T. This is only possible because the AIR-T has a GPU embedded directly in the radio.

Utilizing NVIDIA Triton on an AIR-T presents several benefits, especially in the context of deploying machine learning models for radio signal processing. Triton’s robust inference serving platform enables seamless integration of AI models onto the AIR-T, enhancing its capabilities for tasks such as signal classification, modulation recognition, and spectrum sensing. The platform’s efficiency in managing and scaling machine learning inference ensures real-time processing of radio signals, contributing to improved situational awareness and decision-making in dynamic communication environments. Additionally, Triton’s compatibility with various deep learning frameworks and support for GPU acceleration on NVIDIA GPUs further accelerates the execution of complex models on the AIR-T, maximizing performance for AI-driven radio frequency applications. The deployment of NVIDIA Triton on an AIR-T empowers advanced AI capabilities, enabling more intelligent and adaptive radio communication systems.

Please see the Triton Inference Server tutorial and related articles for more details.
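
To illustrate the client side of that workflow, the sketch below sends a buffer of float32 data to a Triton server running on the AIR-T using the tritonclient Python package. The model name, tensor names, and input shape are hypothetical placeholders that would come from your own model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running locally on the AIR-T
# (localhost:8000 is the Triton HTTP default; adjust for your deployment)
client = httpclient.InferenceServerClient(url="localhost:8000")

# Example input: one batch of 8192 float32 values (e.g., interleaved I/Q samples).
# The model name and tensor names below are hypothetical placeholders.
samples = np.random.randn(1, 8192).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(samples.shape), "FP32")
infer_input.set_data_from_numpy(samples)
requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

# Run inference and retrieve the result as a NumPy array
response = client.infer(model_name="signal_classifier",
                        inputs=[infer_input],
                        outputs=[requested_output])
predictions = response.as_numpy("OUTPUT__0")
print("Predicted class:", int(np.argmax(predictions)))
```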

Containerizing your AI Applications

Docker is a containerization platform that enables developers to package applications and their dependencies into portable and self-sufficient containers, ensuring consistent deployment across diverse computing environments.

Deploying edge applications within Docker containers offers several advantages that streamline the development, deployment, and management processes. Firstly, Docker provides a consistent and reproducible environment, ensuring that applications run reliably across diverse edge devices, irrespective of the underlying infrastructure. This portability simplifies the deployment pipeline and mitigates compatibility issues. Docker’s lightweight nature allows for efficient resource utilization on edge devices, optimizing performance and minimizing overhead. The containerization approach also facilitates easy scaling of applications, enabling seamless replication and distribution across a network of edge nodes. With Docker, developers can encapsulate application dependencies, making it simpler to manage and update software components. Additionally, the isolation provided by containers enhances security by containing potential vulnerabilities within the confines of the container. In essence, deploying edge applications in Docker containers enhances portability, scalability, security, and overall operational efficiency in edge computing environments.

AirStack has long supported virtual environments, such as Anaconda, and Docker containers. Along with AirStack 1.0, we are providing an official tutorial for creating and deploying your AIR-T applications in a containerized environment. The tutorial demonstrates how to set up and run a Docker container on your AIR-T with the radio drivers properly exposed to the container. Please see that tutorial for more details and implementation guidance.
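
The tutorial itself walks through the Docker command line; as a rough illustration of the same idea from Python, the sketch below uses the third-party docker SDK to start a container with the GPU runtime enabled and example device and driver paths passed through. The image name, device node, and mount paths are placeholders, so consult the tutorial for the exact devices and volumes the AIR-T requires.

```python
import docker

# Connect to the local Docker daemon
client = docker.from_env()

# Run a container with the radio device nodes and driver libraries exposed.
# The image name and device/volume paths below are placeholders only.
container = client.containers.run(
    image="my-airt-app:latest",               # hypothetical application image
    command="python3 /app/run_radio_app.py",  # hypothetical entry point
    runtime="nvidia",                          # expose the embedded GPU
    devices=["/dev/your_radio_device:/dev/your_radio_device:rwm"],
    volumes={"/path/to/radio/drivers": {"bind": "/path/to/radio/drivers",
                                        "mode": "ro"}},
    detach=True,
)
print("Started container:", container.short_id)
```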

Additional New Tutorials

  • Analyzing Unknown Signals — To demonstrate the ease with which third-party open-source applications may be run on the AIR-T product line, we have created a tutorial to install and run SigDigger. SigDigger is a free digital signal analyzer for GNU/Linux and macOS, designed to extract information from unknown radio signals.
  • Full AIR-T Backup — In this tutorial we show you how to create a copy of your AIR-T by backing it up and restoring it. This method can be useful when software configurations, programs, and files must be replicated on other boards of the same model.

Official AirStack 1.0.0 Release Notes

  • NVIDIA JetPack 4.6.4 — AirStack 1.0 upgrades the base operating system from JetPack 4.6.0 to JetPack 4.6.4. This includes a real-time patched version of Linux for Tegra as well as significant updates to GPU acceleration libraries such as TensorRT, CUDA, and cuDNN.
  • Time API — We have implemented the SoapySDR Time API on the AIR-T. This combination of firmware and software allows for the internal hardware clock on the AIR-T to be set and used for a wide variety of applications. See the Time API Tutorial for details.
  • Timed RX Streams — RX streams can now be activated at a specific time, meaning the user can control exactly when signal reception begins.
  • HW Triggered TX Buffers — Support for HW triggering for TX channels has been added, mirroring the functionality of the RX channels. See the updated Transmit Tutorial for details.
  • Timed TX Buffers — As with RX streams, TX buffers can now be transmitted at a specified time (see the sketch after this list). Currently only a single buffer may be queued at a time; this limitation will be addressed in a future AirStack release.
  • Multi Channel Sync — The timed functionality for RX or TX can now be leveraged to synchronize multiple channels in a given stream to one another. An example of how to achieve this can be found in the new Time API Tutorial.
  • Limited Timestamp Metadata — When using the Time API, RX streams will report the time the stream was activated (i.e., the time the first sample was received) and TX buffers will report the time the buffer was sent to the RF transmitter. This functionality will be expanded upon in a future AirStack release to include timestamps for each buffer returned from readStream() for RX streams.
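
As a companion to the timed receive example above, here is a minimal sketch of a timed TX burst using the SoapySDR Python bindings; the driver name, tuning values, and waveform are again illustrative placeholders, and the Transmit Tutorial is the authoritative reference.

```python
import numpy as np
import SoapySDR
from SoapySDR import (SOAPY_SDR_TX, SOAPY_SDR_CF32,
                      SOAPY_SDR_HAS_TIME, SOAPY_SDR_END_BURST)

# Open the radio (driver name and tuning values are illustrative placeholders)
sdr = SoapySDR.Device(dict(driver="SoapyAIRT"))
sdr.setSampleRate(SOAPY_SDR_TX, 0, 31.25e6)
sdr.setFrequency(SOAPY_SDR_TX, 0, 2.4e9)

# Build a single burst: a complex tone (contents are arbitrary for this example)
n = 16384
tone = np.exp(2j * np.pi * 0.01 * np.arange(n)).astype(np.complex64)

# Schedule the buffer to transmit 1 second from "now" on the hardware clock.
# AirStack 1.0 supports queuing only one timed buffer at a time.
tx_time_ns = sdr.getHardwareTime() + int(1e9)
tx_stream = sdr.setupStream(SOAPY_SDR_TX, SOAPY_SDR_CF32, [0])
sdr.activateStream(tx_stream)
flags = SOAPY_SDR_HAS_TIME | SOAPY_SDR_END_BURST
status = sdr.writeStream(tx_stream, [tone], n, flags, tx_time_ns)
print("Samples queued for transmit:", status.ret)

sdr.deactivateStream(tx_stream)
sdr.closeStream(tx_stream)
```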

Download

AirStack 1.0 is available for customers to download in the Developer Portal.

Please note that upgrading to AirStack 1.0.0 from previous versions of AirStack requires a re-flash of the operating system in addition to the usual firmware update. Please see the installation procedure to apply the software update to your AIR-T, followed by the firmware update procedure.