Streaming & Data Processing
SDL provides a real-time event streaming and transformation hub that ingests data from sensors, full-motion video (FMV) metadata, cyber feeds, and operational systems. The hub converts between 14+ tactical data formats bidirectionally, enabling seamless interoperability across coalition partners, echelon boundaries, and heterogeneous C2 systems.
Path for Data Flow
Data entering SDL follows a processing path determined by the mission requirement.
The Native Transformation Path converts directly between external formats without persisting data in a canonical model.
- Inbound format received — data arrives in Format A (e.g., CoT XML from a TAK server).
- Direct conversion — the transformer converts Format A to Format B (e.g., GeoJSON for a mapping application).
- Outbound delivery — the converted payload is delivered to the destination system.
This path is used for coalition interoperability scenarios where data must be relayed between partner systems in their native formats with minimal latency.
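The three steps above can be sketched as a single direct transform with no canonical model in between. This is an illustrative sketch, not SDL's actual transformer API; the function name and field mappings are assumptions.

```python
import xml.etree.ElementTree as ET

def cot_to_geojson(cot_xml: str) -> dict:
    """Direct Format A -> Format B conversion: CoT XML in, GeoJSON Feature out,
    with nothing persisted in between (hypothetical converter, for illustration)."""
    event = ET.fromstring(cot_xml)
    point = event.find("point")
    return {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON uses [longitude, latitude] ordering
            "coordinates": [float(point.get("lon")), float(point.get("lat"))],
        },
        "properties": {"uid": event.get("uid"), "type": event.get("type")},
    }

cot = '<event uid="T-1" type="a-f-G"><point lat="38.89" lon="-77.03" hae="0"/></event>'
feature = cot_to_geojson(cot)
```

Because no canonical model is involved, latency is bounded by the cost of the single parse-and-emit step.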
Supported Format Transformations
| Format | Description |
|---|---|
| CoT XML | Cursor on Target XML — the standard position-reporting format used by TAK ecosystem applications. |
| CoT Protobuf | Binary protobuf encoding of Cursor on Target messages for reduced bandwidth and faster parsing. |
| GeoJSON | Open standard for encoding geographic features. Used by web mapping applications and GIS tools. |
| Lattice | Data exchange format for mesh networking platforms. Supports entity state, sensor data, and command-and-control messages. |
| Foundry | Ontology-based actions and object format for analytical platforms. |
| ISA XML | Intelligence, Surveillance, and Assessment XML format for structured intelligence reporting. |
| ETF Proto | Entity Tracking Framework protobuf format for high-rate entity tracking. |
| OMNI/TRAX | Track management format supporting multi-source track correlation and fusion. |
| TAK Data Package | Bundled data packages containing map overlays, imagery, and mission data for TAK clients. |
| CATAPULT IIR | Intelligence Information Report format for structured dissemination of intelligence products. |
| Additional formats | The transformer framework is extensible. New format converters can be deployed as containerized plugins without modifying the core platform. |
Connector Ecosystem
Connectors handle the transport-layer integration between SDL and external systems. Each connector manages connection lifecycle, authentication, backpressure, and retry logic for its protocol.
| Connector | Description |
|---|---|
| Event Streaming | Consumes and produces messages on streaming topics. Supports consumer groups, partitioned ordering, and configurable batching. |
| TAK Server | Connects to TAK servers with protocol auto-negotiation (TCP, TLS, WebSocket). Supports both CoT XML and CoT protobuf wire formats. |
| Lattice SDK | Integrates with mesh networking platforms via their native SDK for entity exchange and command relay. |
| Foundry Ontology Actions | Pushes and pulls objects and actions through the analytical platform’s ontology API. |
| Gotham Maps / TWB | Streams geospatial layers and track data to and from operational mapping platforms. |
| TRAX gRPC Streaming | High-throughput gRPC streaming connector for track management systems. |
| TCP | Raw TCP socket connector for legacy systems that communicate over plain TCP streams. |
| PostGIS | Reads from and writes to geospatial databases using SQL with spatial extensions. |
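The connection-lifecycle and retry responsibilities described above can be illustrated with a minimal TCP connector. The class, parameter names, and backoff schedule are assumptions for the sketch, not SDL configuration.

```python
import socket
import time

class TcpConnector:
    """Minimal sketch of a raw TCP connector with bounded exponential-backoff
    retry. Retry count and base delay are illustrative defaults."""

    def __init__(self, host: str, port: int, max_retries: int = 5, base_delay: float = 0.5):
        self.host, self.port = host, port
        self.max_retries, self.base_delay = max_retries, base_delay

    def backoff_delays(self) -> list:
        # Delay doubles on each failed attempt: 0.5s, 1s, 2s, ...
        return [self.base_delay * (2 ** i) for i in range(self.max_retries)]

    def connect(self) -> socket.socket:
        """Attempt the connection, sleeping between failures; give up after
        max_retries attempts."""
        last_err = None
        for delay in self.backoff_delays():
            try:
                return socket.create_connection((self.host, self.port), timeout=5)
            except OSError as err:
                last_err = err
                time.sleep(delay)
        raise ConnectionError(f"gave up after {self.max_retries} attempts") from last_err
```

A production connector would add jitter to the backoff and surface backpressure to upstream producers; both are omitted here for brevity.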
Security Markings
Classification markings are preserved and propagated through every stage of the streaming pipeline.
- Inbound markings — when a source record includes classification markings, those markings are extracted during ingestion and attached to the output message.
- Marking modes — operators configure one of four marking behaviors per connector:
  - Add — append markings to existing markings on the entity.
  - Remove — strip specific marking categories (for downgrade scenarios with appropriate guard validation).
  - Overwrite — replace existing markings with the connector’s configured markings.
  - Ignore — pass through without modifying markings.
- Outbound propagation — when data is transformed to an external format, the markings are translated to the target format’s marking schema where supported.
All marking changes are audit-logged with the source, destination, and policy that authorized the change.
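The four marking modes reduce to simple set operations on an entity's marking set, as in this sketch. The marking strings are placeholders, not a real classification schema.

```python
def apply_marking_mode(mode: str, existing: set, configured: set) -> set:
    """Apply one of the four per-connector marking behaviors.

    existing   -- markings already on the entity
    configured -- markings configured on the connector
    """
    if mode == "add":
        return existing | configured        # append to existing markings
    if mode == "remove":
        return existing - configured        # strip specific categories
    if mode == "overwrite":
        return set(configured)              # replace existing markings
    if mode == "ignore":
        return set(existing)                # pass through unchanged
    raise ValueError(f"unknown marking mode: {mode}")
```

In a real pipeline each call would also emit the audit record (source, destination, authorizing policy) described above.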
Performance & Processing Modes
The streaming hub supports several processing modes to balance throughput, latency, and ordering requirements.
Batch Processing
For high-throughput ingestion scenarios, records are grouped into configurable batches before processing. Batching amortizes per-record overhead and increases throughput for bulk data loads, historical replay, and sensor dump ingestion.
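The batching behavior is the standard chunking pattern: accumulate records until the configured batch size is reached, then emit, with a final short batch for the remainder. A minimal sketch:

```python
from typing import Iterable, Iterator, List

def batched(records: Iterable, batch_size: int) -> Iterator[List]:
    """Group a record stream into fixed-size batches; the last batch may be
    smaller. Per-record overhead (serialization, network round-trips) is paid
    once per batch instead of once per record."""
    batch: List = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Larger batches raise throughput at the cost of per-record latency, which is why the size is left configurable per workload.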
Schema Registry
SDL maintains a schema registry that stores versioned schemas for every data format flowing through the platform.
- Schema validation — inbound records are validated against their registered schema before processing. Records that fail validation are routed to a dead-letter topic with diagnostic metadata.
- Schema evolution — the registry supports backward-compatible schema changes. New fields can be added without breaking existing consumers.
- Schema discovery — operators and developers can browse registered schemas through the platform’s API to understand available data formats and their fields.
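The validation-and-routing step can be sketched as follows; the simplified schema shape (a list of required fields) is an assumption for illustration, not SDL's registry format.

```python
def validate_and_route(record: dict, schema: dict) -> tuple:
    """Validate a record against its registered schema. Valid records go to
    the main topic; failures go to a dead-letter topic carrying diagnostic
    metadata about what was wrong."""
    missing = [field for field in schema["required"] if field not in record]
    if missing:
        return "dead-letter", {"record": record, "missing_fields": missing}
    return "main", record
```

Routing failures with diagnostics, rather than dropping them, lets operators inspect and replay dead-lettered records after fixing the producer.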
Correlation & Rehydration
Multi-Source Correlation
When multiple data sources report on the same real-world entity (for example, two sensors tracking the same vehicle), the streaming pipeline correlates these reports into a single fused entity. Correlation rules are configurable and can match on geographic proximity, entity identifiers, temporal overlap, or custom attribute comparisons.
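A correlation rule combining two of the criteria above — geographic proximity and temporal overlap — might look like this. The report shape, thresholds, and function names are illustrative assumptions, not SDL's shipped rules.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correlates(a: dict, b: dict, max_dist_m: float = 100.0, max_dt_s: float = 10.0) -> bool:
    """Fuse two reports into one entity only if they are close in both
    space and time. Thresholds are illustrative defaults."""
    close = haversine_m(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_dist_m
    recent = abs(a["ts"] - b["ts"]) <= max_dt_s
    return close and recent
```

Identifier matching and custom attribute comparisons would be additional conjuncts in the same predicate.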
Entity Rehydration
Entities that arrive as lightweight position reports (minimal projection from a bandwidth-constrained link) can be rehydrated with full attribute data from local storage or upstream nodes. Rehydration is triggered automatically when an operator requests detailed information about a minimally projected entity, or on a scheduled basis to pre-populate the local entity cache.
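At its core, rehydration is a merge of cached full attributes under the fresher fields of the minimal report. The cache layout (a dict keyed by entity uid) is a hypothetical sketch, not SDL's storage schema.

```python
def rehydrate(minimal: dict, cache: dict) -> dict:
    """Enrich a lightweight position report with full attribute data from a
    local entity cache. Fields on the minimal report win on conflict, since
    they are the most recent observation."""
    full = cache.get(minimal["uid"], {})
    return {**full, **minimal}

# A minimal report from a bandwidth-constrained link carries only uid + position;
# the local cache holds the full entity record.
cache = {"T-1": {"uid": "T-1", "callsign": "RAVEN", "affiliation": "friendly"}}
entity = rehydrate({"uid": "T-1", "lat": 38.89, "lon": -77.03}, cache)
```

On a cache miss a real implementation would fall back to querying an upstream node, as described above.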
Next Steps
- Understand how streamed data is governed by policy: Policy Engine & Data Governance
- Learn about data tiering and storage for processed data: Data Tiering & Storage