AirStack Core Software

Operating System

AirStack Core™ is the runtime and set of application programming interfaces (APIs) that power high-performance RF AI computing. The bundled APIs provide direct tuning and control of the AIR-T's SDR and AI models, along with capabilities for custom application development.

AirStack Core software is the operating system for Deepwave’s platform.  Core’s unique combination of capabilities enables the analysis of RF signals at the edge, allowing only AI-generated results to be shared with connected IT systems.  Sending only AI model results means the data output from the AIR-T is greatly reduced and – more importantly – low-latency intelligence is delivered rather than large volumes of raw RF (IQ) data.

    • Delivers RF insights via high-performance AIR-T computing, powered by a patented data flow management system
    • Fine-tune analysis capabilities through direct control of SDR and AI models
    • Simplified provisioning and management of RF edge applications
    • Developer-friendly, API-centric approach for SDR application development
    • Supports secure, air-gapped environment operations
    • Easy deployment – no need to go deep into developer documentation or system admin tasks.  Just flash and go

How It Works

AirStack Core provides the software foundation for delivering RF Intelligence. The Core operating system is based on NVIDIA JetPack and includes support for the NVIDIA® Jetson Orin™ platform.

The product is built on recent Linux kernels and Ubuntu LTS releases so customers can leverage the latest advances in AI toolkits and frameworks. It features a modular, cloud-native architecture that supports containers, microservices, and DevOps/CI/CD workflows, bringing scalability, resiliency, and agility to the AIR-T platform. In addition, it integrates the latest NVIDIA AI software and hardware stack to enable seamless execution of AI workflows.


Noteworthy Features Include

  • OS customizations for high-throughput RF data processing
  • Zero-copy DMA data transfers
  • Embedded signal and AI processing on the same unified system
  • Support for industry-standard frameworks for AI model deployment
  • Full desktop-class compute power in a compact edge form factor
  • Containerized and bare-metal application development support
  • NVIDIA AI Enterprise and Jetson ecosystem support


Supported AirStack Tools

AirStack Core is the base operating system for the Deepwave ecosystem. It supports the following AirStack tools.

AirStack Edge™

AirStack Edge is the remote orchestration layer for managing fleets of deployed AIR-T devices. It integrates with AirStack Core, providing a unified management platform for scalable deployment, configuration, and continuous maintenance of RF Intelligence applications across multiple edge nodes.

Explore AirStack Edge

AirStack BitStream™

AirStack BitStream is the FPGA application development framework used to maximize the AIR-T’s performance. It enables ultra-low latency RF data pre-processing on the integrated FPGA, which significantly frees up AirStack Core’s CPU and GPU resources for advanced AI functionality.

Explore AirStack BitStream

Support for Industry Standard Tools

Below is a selection of the APIs and frameworks supported within AirStack Core.

AI Computing Tools

CUDA

The NVIDIA CUDA® Toolkit provides the core development environment. It includes the compiler, libraries, and debugging tools necessary to create and optimize GPU-accelerated applications on the high-performance AIR-T platform.

Explore CUDA

TensorRT

NVIDIA TensorRT™ optimizes and accelerates deep learning models. It ensures low latency and high throughput for RF Intelligence, delivering production-grade performance on the AIR-T.

Explore TensorRT

ONNX Runtime

ONNX Runtime is a high-performance engine for deploying and running models exported from virtually any AI framework, such as PyTorch or MATLAB. It is crucial for cross-framework compatibility and scalable RF Intelligence deployment on the AIR-T.

Explore ONNX Runtime
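As a rough illustration of that cross-framework workflow, the sketch below loads an exported ONNX model and runs one frame of IQ data through it. The model path, the `(2, N)` float32 input layout, and the function name are assumptions for illustration, not AirStack-specific code; a real model defines its own input shapes.

```python
import numpy as np

try:
    import onnxruntime as ort  # present wherever ONNX Runtime is installed
except ImportError:
    ort = None

def classify_iq(model_path, iq_frame):
    """Run one frame of IQ samples through an exported ONNX model.

    Assumes the model takes a single float32 input of shape
    (batch, 2, n_samples); adjust to match your exported graph.
    """
    if ort is None:
        raise RuntimeError("onnxruntime is not installed")
    session = ort.InferenceSession(model_path,
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    batch = iq_frame.astype(np.float32)[np.newaxis, ...]  # add batch dim
    return session.run(None, {input_name: batch})[0]
```

Because the same `.onnx` file can come from PyTorch, TensorFlow, or MATLAB, this one inference path serves models from any of those frameworks.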

AI Model Development Tools

PyTorch

PyTorch is the Python-native deep learning framework used to create custom AI applications. AirStack Core then enables these trained models to be deployed efficiently for real-time RF Intelligence on the high-performance AIR-T.

Explore PyTorch
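To make the PyTorch-to-deployment path concrete, here is a hedged sketch of a toy signal classifier: a tiny 1-D CNN over two input channels (I and Q). The architecture, class name, and class count are illustrative assumptions, not a Deepwave reference model; the guard lets the sketch degrade gracefully where PyTorch is absent.

```python
try:
    import torch
    import torch.nn as nn
except ImportError:
    torch = None  # sketch requires PyTorch; guard keeps it importable

if torch is not None:
    class IQClassifier(nn.Module):
        """Toy 1-D CNN over 2-channel (I and Q) input; purely illustrative."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # collapse the time axis
                nn.Flatten(),
                nn.Linear(16, n_classes),
            )

        def forward(self, x):              # x: (batch, 2, n_samples)
            return self.net(x)
```

A model like this would be trained offline, exported to ONNX, and then run on the AIR-T through an optimized runtime.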

TensorFlow

TensorFlow is a production-focused framework for creating custom AI applications. It is often preferred for its robust deployment and scaling capabilities, making it ideal for transitioning RF Intelligence prototypes into production environments on the AIR-T platform.

Explore TensorFlow

MATLAB

MATLAB, with its Deep Learning Toolbox, is ideal for Research and Development (R&D). Engineers can design and test Signal Analysis models, then export a portable ONNX file for deployment on the high-performance AIR-T platform.

Explore MATLAB

Digital Signal Processing Tools

CuPy

CuPy is a Python library that enables GPU-accelerated computing with a NumPy-compatible API. It is vital for high-performance digital signal processing (DSP), allowing engineers to execute complex signal analysis and numerical tasks much faster than on a CPU.

Explore CuPy
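Because CuPy mirrors the NumPy API, DSP code can often move to the GPU by swapping a single import. The sketch below averages a power spectrum over frames of IQ samples; it is a generic illustration (not AirStack-specific), and the NumPy fallback works precisely because the two APIs match.

```python
try:
    import cupy as xp      # GPU execution when CuPy is available
except ImportError:
    import numpy as xp     # identical API on the CPU

import numpy as np

def power_spectrum(iq, n_fft=1024):
    """Average FFT magnitude-squared over frames of complex IQ samples."""
    n_frames = iq.size // n_fft
    frames = iq[: n_frames * n_fft].reshape(n_frames, n_fft)
    spectrum = xp.fft.fftshift(xp.fft.fft(frames, axis=1), axes=1)
    return (xp.abs(spectrum) ** 2).mean(axis=0)

# A pure tone at 0.125 cycles/sample shows up as a single spectral peak
t = np.arange(8192)
tone = np.exp(2j * np.pi * 0.125 * t).astype(np.complex64)
psd = power_spectrum(xp.asarray(tone))
```

On the AIR-T class of hardware, the same function runs on the GPU with no code changes beyond the import.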

GNU Radio

AirStack Core fully supports GNU Radio, a widely used, open-source modular signal processing toolkit. This allows engineers to develop and execute complex signal analysis and processing applications in Core using native C++ or Python.


Explore GNU Radio

Frequently Asked Questions

How does AirStack Core ensure a superior ROI and competitive advantage at the RF edge?

AirStack Core’s patented data flow management system significantly increases ROI by reducing the need to transmit high volumes of raw RF (IQ) data. The Core operating system enables embedded signal and AI processing on a unified system, meaning only low-latency, AI-generated intelligence is shared with connected IT systems. This radically reduces data output size, minimizes network bandwidth costs, and delivers time-sensitive Electromagnetic Spectrum Monitoring insights immediately at the tactical edge, ensuring market leadership and competitive advantage.
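The bandwidth arithmetic behind that claim can be sketched with assumed figures: a receiver streaming complex64 IQ at 100 MS/s versus a classifier emitting a few small results per second. The sample rate, result size, and result rate below are illustrative assumptions, not AIR-T specifications.

```python
# Hypothetical figures for illustration only
sample_rate = 100e6          # 100 MS/s per channel (assumed)
bytes_per_sample = 8         # complex64: 4-byte I + 4-byte Q
raw_bytes_per_sec = sample_rate * bytes_per_sample        # 800 MB/s of raw IQ

result_bytes = 256           # one JSON-sized AI result (assumed)
results_per_sec = 10         # modest reporting rate (assumed)
ai_bytes_per_sec = result_bytes * results_per_sec         # 2.56 kB/s

reduction = raw_bytes_per_sec / ai_bytes_per_sec
print(f"Raw IQ: {raw_bytes_per_sec/1e6:.0f} MB/s, "
      f"AI results: {ai_bytes_per_sec/1e3:.2f} kB/s, "
      f"~{reduction:,.0f}x less data")
```

Even with generous result sizes, shipping inferences instead of raw IQ cuts the network load by several orders of magnitude.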

How do AirStack Core and AirStack Edge streamline secure deployment and IT simplicity for my RF AI systems?

AirStack Core provides a modular, container-friendly operating system that integrates seamlessly with AirStack Edge, which is the remote orchestration layer. This combination delivers IT simplicity by centralizing device management, configuration, and maintenance for fleets of AIR-T devices operating in the field. By leveraging a unified platform for deployment and enabling secure, air-gapped environment operations, your organization gains the scalability and resiliency necessary to manage complex, distributed RF Intelligence missions without increasing system administration overhead.

What unique technical capabilities does AirStack Core provide for low-latency RF Signal Analysis?

AirStack Core is built on NVIDIA JetPack and features critical OS customizations for high-throughput RF data processing. Its architecture facilitates DMA zero copy, which is the key to minimizing latency. This allows for direct data transfer from the RF receivers to the GPU/CPU shared memory, enabling immediate use of industry-standard frameworks like CUDA, TensorRT, and CuPy for accelerated digital signal processing (DSP) and real-time signal analysis and AI model execution on the same unified platform.
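The zero-copy idea can be illustrated on the host with NumPy: a raw buffer is reinterpreted as typed samples without copying, so downstream DSP reads the same memory the hardware wrote. This is a conceptual sketch of the principle, not the AIR-T driver API, and the interleaved 16-bit I/Q layout is an assumption.

```python
import numpy as np

# Stand-in for a DMA buffer that a radio front end wrote into;
# interleaved 16-bit I/Q layout is assumed for illustration.
dma_buffer = bytearray(4096)

# Zero-copy step: frombuffer builds a typed view over the existing
# memory instead of duplicating it.
samples = np.frombuffer(dma_buffer, dtype=np.int16)

# Writes to the buffer are visible through the view (shared memory);
# assumes a little-endian host, as on x86 and ARM.
dma_buffer[0] = 1
assert samples[0] == 1

# Pairing I/Q into complex64 converts (and therefore copies) once;
# the zero-copy win is in avoiding redundant buffer-to-buffer moves.
iq = samples.astype(np.float32).view(np.complex64)
```

In a real zero-copy pipeline the same view-not-copy discipline applies across the CPU/GPU shared-memory boundary, which is what keeps end-to-end latency low.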

How does AirStack Core simplify deployment and management of RF Intelligence applications across a fleet?

AirStack Core simplifies the entire project lifecycle by providing a modular, cloud-native architecture that supports containerized and bare-metal application development. This foundation enables easy integration with AirStack Edge, which serves as the remote orchestration layer for fleet management. The “flash and go” deployment model and simplified provisioning mean developers avoid complex system admin tasks, resulting in faster deployment timelines, lower operational overhead, and guaranteed system scalability and resiliency.
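For readers unfamiliar with containerized edge deployment, a hedged sketch of how such an application might be packaged follows. The base image tag, file paths, and entry point are illustrative assumptions, not Deepwave-published artifacts; consult the AirStack documentation for supported images.

```dockerfile
# Hypothetical container for an RF Intelligence application.
# Base image, paths, and entry point are assumptions for illustration.
FROM nvcr.io/nvidia/l4t-base:r36.2.0

# Install application dependencies, then copy the application itself
COPY requirements.txt /app/requirements.txt
RUN pip3 install --no-cache-dir -r /app/requirements.txt
COPY src/ /app/src/

# Launch the signal-analysis service when the container starts
WORKDIR /app
CMD ["python3", "src/main.py"]
```

An orchestration layer can then roll this image out, configure it, and update it across a fleet of edge nodes without per-device administration.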
