SDL Developer Guide
Welcome to the SDL Developer Guide. This documentation is for developers, system administrators, and other technical personnel responsible for installing, configuring, deploying, and maintaining SDL (SOF Data Layer) systems.
Purpose of This Guide
This developer guide focuses on installing, configuring, and deploying SDL platforms. If you’re an end-user looking to operate an already deployed SDL system, please refer to the [R]DP User Guide.
What is SDL?
SDL is a modular, container-native data platform that enables organizations to collect, process, fuse, query, and disseminate mission-critical data — from cloud data centers to disconnected tactical edge nodes. The platform is decentralized by design, capable of running on tactical edge, on-prem, or cloud services, and can be deployed in 100% air-gapped environments.
SDL federates data across distributed nodes while maintaining data governance. A single software baseline adapts its runtime behavior to the deployment environment.
For a comprehensive architectural overview, see Platform Architecture.
Who This Guide Is For
- Developers: Software engineers building custom integrations and data pipelines
- System Administrators: Personnel responsible for deploying and maintaining SDL
- DevOps Engineers: Staff managing CI/CD pipelines and infrastructure
- Data Engineers: Technical staff designing data flows and transformations
- Technical Architects: Decision-makers evaluating and designing SDL deployments
Key Platform Capabilities
Streaming & Transformation
Real-time event streaming and transformation hub supporting 14+ tactical data formats. Hub-and-spoke architecture with bidirectional format conversion. Learn more about Data Pipelines →
Federated Query & Analytics
Federated SQL engine, virtual knowledge graph (VKG), and real-time analytics. "Zero ETL" — data stays where it lives. Learn more about Federated SQL →
Security & Governance
Policy engine with row-level and column-level data obfuscation, classification markings, and multi-enclave evaluation. Data-policy-as-code deployed alongside the platform. Learn more about Security →
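To give a flavor of what "data-policy-as-code" can look like, here is a purely illustrative sketch; the field names and structure below are assumptions for this example, not the actual SDL policy schema:

```yaml
# Illustrative only — not the real SDL policy format.
# Sketch of a column-level obfuscation rule with a classification marking.
policy:
  name: redact-track-coordinates
  match:
    dataset: tracks
    columns: [lat, lon]
  action: obfuscate                       # column-level obfuscation
  marking: "SECRET//REL TO USA, FVEY"     # marking evaluated per enclave
```

Because the policy is plain text, it can be versioned, reviewed, and deployed alongside the platform like any other code artifact.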
Getting Started
SDL is packaged as a collection of Helm charts located in $RDP_HOME/charts.
Environment-specific overrides live in $RDP_HOME/overrides/ (e.g., sdl-dev, dev-minimal).
Quick Start
```shell
# Clone the repo and set RDP_HOME
git clone https://github.com/raft-tech/sdl.git
export RDP_HOME=$(pwd)/rdp
# Provide credentials for the GitHub container registry
export GHP_USERNAME=your_github_username
export GHP_SECRET=your_github_personal_access_token
# Create a local Kind cluster
${RDP_HOME}/scripts/rdp_create_kind.sh
# Deploy SDL (use dev-minimal for a lighter footprint)
${RDP_HOME}/scripts/rdp_deploy.sh -e dev-minimal
```
For prerequisite CLI tools (kubectl, helm, kind, jq, yq) and detailed environment setup, see Tools & Environment.
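Before running the Quick Start, it can help to confirm those tools are actually on your PATH. A minimal sketch (the tool list comes from this guide; version constraints are not checked):

```shell
#!/bin/sh
# Preflight check: report which prerequisite CLI tools are installed.
# Tool list is taken from this guide; no version requirements are verified.
missing=0
for tool in kubectl helm kind jq yq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "tools missing: $missing"
```

If the final count is nonzero, install the missing tools before running the deployment scripts.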
Two Paths for Data Flow
SDL’s architecture provides principled flexibility: enforcement where it creates value and flexibility where it enables mission.
Path 1: Data Model Path
Feeds from tactical systems (TAK, GCCS-J, TRAX, and others) pass through configurable transformers that map source data to a common data model. Organizations choose one of two options:

- Warfighting Data Model: Use the platform's built-in Warfighting Data Model as the common model.
- Bring Your Own Data Model (BYODM): Replace the Warfighting Data Model with the organization's own model. The platform is unopinionated about which model is used — it enforces whatever model the organization configures (e.g., OMS/UCI or other domain-specific models).

This is an organization-level choice, not a per-deployment choice. When an organization selects a data model, all deployments across that enterprise use the same model, ensuring complete internal consistency while preserving flexibility across different organizations.
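The guide does not document how the model choice is configured, so the snippet below is only a hypothetical sketch of where such an organization-level setting might live in an environment override; every key name here is an assumption:

```yaml
# Hypothetical override fragment — key names are illustrative,
# not the actual chart values. Shows an organization pinning its model choice.
dataModel:
  mode: byodm                # "wdm" would select the default Warfighting Data Model
  schemaRef: oci://registry.example.com/models/oms-uci:1.0.0
```

Because the choice is organization-wide, an override like this would be shared by every deployment in the enterprise rather than set per environment.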
Path 2: Native Transformation Path
Data can flow through transformers without being forced into the data model. This is not a gap — it is a deliberate design decision that enables critical interoperability scenarios that forced conformity would break:
- Coalition interoperability: Format bridges between allied systems with different standards
- Legacy system integration: Connecting systems that cannot be modified to speak new protocols
- Exploratory data: Ingesting new data sources for analysis before committing to a model representation
- High-fidelity passthrough: Preserving source format when downstream systems require it
> Forced conformity to a single model is what creates stovepipes, because it prevents systems from interoperating with partners who use different models. Flexibility is not the enemy of interoperability — it is what enables interoperability in a heterogeneous coalition environment.
Semantic Data Layer
Both paths integrate with SDL’s semantic layer through Ontology-Based Data Access (OBDA):
- Path 1 data: OBDA maps modeled entities directly to BFO/CCO ontology terms
- Path 2 data: OBDA applies virtual ontology projections, interpreting native format fields as ontology concepts without requiring transformation
This means an analyst can issue a single SPARQL query that returns data from both paths — despite different sources and incompatible schemas. The analyst doesn’t need to know which path the data took; they query the ontology and OBDA handles the rest.
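To make the "single query over both paths" idea concrete, here is a hypothetical SPARQL query; the prefix and the class and property names are illustrative CCO-style terms, not taken from this guide:

```sparql
# Illustrative query — class and property names are assumptions.
PREFIX cco: <http://www.ontologyrepository.com/CommonCoreOntologies/>
SELECT ?event ?timestamp
WHERE {
  ?event a cco:Act ;                     # bound via OBDA mappings (Path 1)
         cco:has_datetime ?timestamp .   # or via virtual projections (Path 2)
}
```

Whether a given result row comes from modeled Path 1 data or from a virtual projection over Path 2 data is resolved by OBDA at query time; the query itself never mentions either path.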
Working with Data
DataSources
The SDL Catalog uses DataSources to manage its connections to external data. Each DataSource requires an Enablement that supplies the connection information (e.g., URL, access key, configuration). This step is typically performed by an admin or a user with an enablement role before data users interact with SDL.
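For illustration, an Enablement might capture information along these lines; the field names below are assumptions for this sketch, not the actual SDL schema:

```yaml
# Illustrative only — not the real DataSource/Enablement format.
datasource:
  name: coastal-radar-feed
  enablement:
    url: https://radar.example.mil/api   # connection endpoint
    accessKey: ${RADAR_ACCESS_KEY}       # supplied via secret, not hard-coded
    configuration:
      pollIntervalSeconds: 30
```

Once an admin completes this step, data users can work with the DataSource without ever handling the connection credentials themselves.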
Capability Areas
Detailed documentation for each SDL capability area:
| Capability | Documentation |
|---|---|
| Event Streaming | Event Streaming — Real-time data streaming backbone |
| Object Storage | Object Storage — S3-compatible storage with lifecycle policies |
| Federated SQL | Federated SQL Engine — Distributed query across heterogeneous sources |
| Virtual Knowledge Graph | RDF & OBDA — Ontology-based data access and SPARQL queries |
| Operational Monitoring | Operational Monitoring — Platform health and performance metrics |
| Data Science Notebooks | Data Science Notebooks — Interactive analysis environment |
| Data Pipelines | Transformation Pipelines — Format conversion and enrichment |
| Security & Governance | Security — Policy engine, classification, access control |
| Federation | Federation — Cross-node data exchange and governance |
Development Workflows
API Development
- RESTful API standards and patterns
- gRPC service integration
- WebSocket support for real-time data
- API versioning and documentation
Support and Resources
- User Documentation: [R]DP User Guide — End-user operational guidance
- Examples: [R]DP Examples — Demonstration scripts and use cases
- API Reference: Comprehensive API documentation for all services
Next Steps
- Review Architecture — Understand the platform architecture and capability layers
- Set Up Development Environment — Follow installation guides for your target environment
- Explore Examples — Review practical implementations and demo scripts