Test Platform
SDVA test platform: Runtime configuration, low-latency switching, remote usage.
The OSxCAR SDV Test Bench is a scalable, runtime-configurable test platform for Software-Defined Vehicles: More efficient validation, flexible architectures, one test bench for diverse deployment scenarios. Modular hardware (t.RECS), physical-layer switching, remote access, and AI-assisted optimization.
Runtime Reconfiguration
Software-defined topologies → switch scenarios flexibly, no hardware modifications.
Modular Hardware
t.RECS with standard form factors (SMARC, COM-HPC) → x86, ARM, RISC-V, GPU/FPGA-ready.
Flexible Architectures
Legacy domains, zone controllers, central computers → all E/E topologies in one test bench.
HIL/SIL + Shadow Mode
Hardware- and Software-in-the-Loop, A/B testing parallel to production.
- More Efficient Validation: Significantly reduce test cycles via software reconfiguration instead of hardware rebuilds
- Cost Potential: One test bench for various vehicle architectures instead of separate hardware duplicates
- Scalability: Platform grows with SDV requirements (L2+ to L3+ autonomy)
- AI Integration: Collect realistic latency data for GNN training
Software-Defined Vehicles require flexible test environments for different vehicle architectures (from legacy domains to central computers, L2+ to L3 autonomy). Traditional setups are hard-wired → hardware rebuilds are time-consuming. The OSxCAR test bench solves this through Physical-Layer Switching: Reconfigure topologies dynamically, no cable swapping.
- Hardware Duplicates: Each partner maintains its own infrastructure → costs, CO₂
- Slow: Rebuilds take a long time, delaying validation
- Data Silos: No central collection for AI training
- Software-defined: Fast reconfiguration, no hardware changes
- Agile: Switch scenarios quickly instead of long rebuilds
- AI-ready: Central data collection for GNN training
OSxCAR Integration: Remote access enables location-independent testing → validate Wasm modules and train AI models without target hardware on-site. A central test bench reduces hardware duplicates and carbon footprint.
The test bench combines Physical-Layer Switching (switching matrix) with RECS Microservers and heterogeneous compute nodes. The core principle: Switch signals physically rather than route virtually → minimal latency, realistic jitter characteristics.
- Switching Matrix: Physical-Layer Switching → switch signals directly, scalable from 8×8 to 64×64 ports, bus-agnostic (Ethernet, CAN, LIN)
- RECS Microservers: Thermally optimized t.RECS modules → expandable with GPU/FPGA for AI workloads
- Compute Nodes: x86 (high-performance), ARM (power-efficient), RISC-V (open-source ISA), FPGA/ASIC (custom accelerators)
- Metrics: Hardware timestamps, µs jitter analysis, Prometheus/Grafana telemetry, ELK logging
Architecture Support: The test bench supports decentralized control units (legacy), regional zone controllers, and highly integrated central computers (L3+ autonomy).
The switching matrix connects ECUs, sensors, and actuators at the physical layer → runtime-configurable topologies without hardware rebuilds. Latency-/jitter-sensitive paths are characterized (critical for ISO 26262).
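To make this concrete, here is a minimal sketch of how such a crosspoint matrix can be modeled in software (the `CrosspointMatrix` class, port count, and API are illustrative assumptions, not the bench's real control interface):

```python
# Minimal model of a physical-layer crosspoint matrix. The class name,
# port count, and API are illustrative assumptions, not the bench's
# real control interface.

class CrosspointMatrix:
    def __init__(self, ports: int = 8):
        self.ports = ports
        self.links: dict[int, int] = {}  # port -> connected peer port

    def connect(self, a: int, b: int) -> None:
        """Create a bidirectional physical link between two ports."""
        for p in (a, b):
            if p in self.links:
                raise ValueError(f"port {p} is already switched")
        self.links[a], self.links[b] = b, a

    def disconnect(self, a: int) -> None:
        """Tear down the link on port a (and on its peer)."""
        peer = self.links.pop(a)
        self.links.pop(peer)

# Example: connect a zone controller (port 0) to a central computer (port 4)
matrix = CrosspointMatrix(ports=8)
matrix.connect(0, 4)
```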
The test bench supports all relevant automotive bus systems, from low-speed LIN via robust CAN networks to deterministic TSN. Each bus system can be flexibly switched via the switching matrix.

Ethernet / TSN
- Standard: IEEE 802.1 TSN
- Speed: 100 Mbit/s / 1 Gbit/s / 10 Gbit/s
- QoS: Time-Aware Shaper
- Use Case: Zone backbone, ADAS

CAN / CAN-FD
- Speed: 1 Mbit/s (CAN), 8 Mbit/s (FD)
- Robustness: Differential, fault-tolerant
- Payload: 8 bytes (CAN), 64 bytes (FD)
- Use Case: Powertrain, chassis

LIN
- Speed: Up to 20 kbit/s
- Topology: Single-master
- Cost: Very low
- Use Case: Comfort, sensors

Metrics
- Latency Measurement: Hardware timestamps
- Jitter Analysis: µs resolution
- Telemetry: Prometheus, Grafana
- Logging: ELK stack
TSN Integration: The test bench uses TSN for deterministic latency guarantees → critical for ADAS and vehicle dynamics control. Time-Aware Shaper (TAS) and Per-Stream Filtering and Policing (PSFP) are configured and validated on the bench.
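As an illustration of what a TAS configuration involves, the following sketch assembles a taprio-style gate-control list and checks that the gate windows cover exactly one cycle (the traffic classes and window lengths are illustrative, not a validated schedule):

```python
# Sketch of a Time-Aware Shaper gate-control list. Traffic classes,
# window lengths, and the 1 ms cycle are illustrative assumptions.

CYCLE_NS = 1_000_000  # 1 ms scheduling cycle

# Each entry: (gate bitmask over traffic classes, window length in ns).
# While a gate is open, frames of that traffic class may be transmitted.
schedule = [
    (0b001, 300_000),  # TC0: control traffic, guaranteed slot
    (0b010, 300_000),  # TC1: ADAS sensor streams
    (0b100, 400_000),  # TC2: best effort
]

# A valid schedule must cover exactly one cycle, with no gaps or overlap.
assert sum(window for _, window in schedule) == CYCLE_NS

for mask, window in schedule:
    print(f"gates {mask:03b} open for {window / 1000:.0f} us")
```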
CAN-FD Advantages: Higher data rate than standard CAN and larger payloads reduce bus load.
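A back-of-the-envelope calculation (frame overheads approximated, bit stuffing ignored) illustrates the effect:

```python
# Rough frame-time comparison, classic CAN vs. CAN-FD. Bit counts are
# approximations; bit stuffing and exact field sizes are ignored.

def can_frame_us(payload_bytes: int) -> float:
    bits = 47 + 8 * payload_bytes   # ~47 bits frame overhead
    return bits / 1.0               # whole frame at 1 Mbit/s -> us

def canfd_frame_us(payload_bytes: int) -> float:
    arbitration_bits = 30                # sent at 1 Mbit/s
    data_bits = 28 + 8 * payload_bytes   # sent at 8 Mbit/s
    return arbitration_bits / 1.0 + data_bits / 8.0

# Transporting 64 bytes: eight classic CAN frames vs. one CAN-FD frame
print(f"8 x CAN (8 B each): {8 * can_frame_us(8):.0f} us")   # ~888 us
print(f"1 x CAN-FD (64 B):  {canfd_frame_us(64):.0f} us")    # ~98 us
```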
Software-defined topologies are the core of the SDVA test bench: Scenarios are defined via configuration files and loaded onto the switching matrix at runtime. No hardware modifications, no cable swapping → just software updates.
Use Cases for Reconfiguration:
- L2+ → L3 Migration: Add central computer, reconfigure zone controllers
- Gateway Tests: Test CAN→Ethernet gateway in different topologies
- Failover Scenarios: Simulate ECU failure, activate redundancy paths
- TSN Configurations: Test different QoS profiles (VLAN, priorities)
Example Workflow: (1) Define topology (nodes, links, bus types), (2) Generate switching matrix config (automatic), (3) Deploy config to bench (REST API), (4) Trigger switchover, (5) Validation (latency measurement, connectivity check).
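A minimal sketch of steps (1) to (4) in Python (endpoint paths, payload schema, and the bench URL are assumptions; the actual REST API may differ):

```python
# Workflow steps (1)-(4) as a sketch: define a topology, hash the config
# for the audit trail, deploy it via REST, and trigger the switchover.
# Endpoint paths, payload schema, and URL are assumptions.
import hashlib
import json
import requests

# (1) Define topology: nodes, links, bus types
topology = {
    "nodes": [
        {"id": "zc_front", "type": "zone_controller"},
        {"id": "central", "type": "central_computer"},
    ],
    "links": [{"a": "zc_front", "b": "central", "bus": "ethernet_tsn"}],
}

# (2) Generate the switching matrix config (here: canonical JSON + hash)
config = json.dumps(topology, sort_keys=True).encode()
config_hash = hashlib.sha256(config).hexdigest()  # logged on switchover

# (3) Deploy config to the bench, (4) trigger the switchover
base = "https://bench.example.com/api/v1"  # hypothetical URL
requests.post(f"{base}/topologies", data=config,
              headers={"Content-Type": "application/json"}).raise_for_status()
requests.post(f"{base}/switchover",
              json={"config_hash": config_hash}).raise_for_status()
```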
The bench logs all switchovers (timestamp, config hash, user) → important for TISAX audits and reproducibility. Configs are versioned (Git) and signed.
The SDVA test bench is cloud-accessible: Partners reserve time slots, deploy software remotely, and access measurement data without being on-site. Multi-tenant architecture guarantees data isolation according to TISAX.

Management
- Time Slots: Reservation system (calendar-based)
- Multi-Tenant: Parallel usage, isolated data spaces
- Deployment: Software upload via REST API
- Monitoring: Live telemetry (Grafana dashboards)

Security (planned, currently optional)
- TISAX-compliant
- Data-isolated
- Encrypted
- Audit trail
Reservation System: Partners book time slots (e.g., 2 hours) via web interface. During the slot, they have exclusive access to configurable resources (switching matrix, RECS nodes). Shared resources (e.g., central logging infrastructure) remain multi-tenant.
Software Deployment: Wasm modules are uploaded via REST API, signed, and deployed to RECS nodes. Fast rollback possible. Native binaries (x86/ARM) also supported, but Wasm preferred (portability, security).
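A sketch of such an upload (endpoint path, header names, and the response format are assumptions; signing key handling is out of scope):

```python
# Upload a signed Wasm module to a RECS node via the bench's REST API.
# Endpoint path, header names, and response format are assumptions.
import requests

with open("lane_keeping.wasm", "rb") as f:
    module = f.read()

resp = requests.post(
    "https://bench.example.com/api/v1/nodes/recs-03/modules",  # assumed path
    data=module,
    headers={
        "Content-Type": "application/wasm",
        "X-Module-Signature": "<base64 signature>",  # placeholder value
    },
)
resp.raise_for_status()
print("deployed:", resp.json().get("module_id"))  # assumed response field
```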
Measurement paths are essential for bench validation and AI training: Latency, jitter, throughput are captured at µs level. Hardware timestamps (FPGA-based) eliminate software overhead.
Telemetry Stack:
- Prometheus: Metrics collection (CPU, memory, bus load, latency)
- Grafana: Live dashboards (time series, heatmaps)
- ELK Stack: Log aggregation (Elasticsearch, Logstash, Kibana)
- Jaeger: Distributed tracing (for Wasm modules)
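As an example of how a bench-side service could feed this stack, a sketch using the official Prometheus Python client (metric name, label, and bucket boundaries are illustrative choices):

```python
# Expose per-path latency samples to Prometheus. Metric name, label,
# and bucket boundaries are illustrative choices.
from prometheus_client import Histogram, start_http_server

LATENCY = Histogram(
    "sdva_path_latency_seconds",
    "End-to-end latency per switched path",
    labelnames=["path"],
    buckets=[50e-6, 100e-6, 250e-6, 500e-6, 1e-3, 5e-3],  # 50 µs .. 5 ms
)

start_http_server(9100)  # scrape target for the Prometheus server

# In the measurement loop: record a hardware-timestamped sample
LATENCY.labels(path="zc_front->central").observe(180e-6)  # 180 µs
```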
Integration with AI: The bench collects realistic data for GNN training. Topology graphs (nodes=ECUs, edges=bus links) + latency measurements are exported (CSV, Parquet). GNN models learn latency predictions for different E/E architectures and optimize software placement. Validation in shadow mode.
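A sketch of turning one exported measurement file into a graph sample (assumes PyTorch Geometric as the GNN framework and a hypothetical CSV layout with src, dst, and latency_us columns):

```python
# Build one GNN training sample from exported bench data. The CSV layout
# (columns src, dst, latency_us) is a hypothetical example; PyTorch
# Geometric is assumed as the GNN framework.
import pandas as pd
import torch
from torch_geometric.data import Data

df = pd.read_csv("bench_export.csv")  # columns: src, dst, latency_us

# Map ECU names to node indices
nodes = sorted(set(df["src"]) | set(df["dst"]))
index = {ecu: i for i, ecu in enumerate(nodes)}

# Edges = bus links, edge attribute = measured latency
edge_index = torch.tensor(
    [[index[s] for s in df["src"]], [index[d] for d in df["dst"]]],
    dtype=torch.long,
)
latency = torch.tensor(df["latency_us"].to_numpy(), dtype=torch.float)

# Node features are a placeholder (one-hot ECU identity); a real pipeline
# would encode ECU type, compute class, bus attachment, etc.
x = torch.eye(len(nodes))

sample = Data(x=x, edge_index=edge_index, edge_attr=latency.unsqueeze(1))
```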
Extended Test Functions:
- HIL (Hardware-in-the-Loop): Real ECUs with simulated sensors/actuators → real signals, controlled environment
- SIL (Software-in-the-Loop): Purely software-based validation before hardware availability
- A/B Shadow Mode Testing: Run the new software version in parallel with the heuristic → log suggestions, don't apply (see the sketch after this list). Validation without production risk
- Test Framework: Integrated result collection, visualization (Grafana), audit trail for ISO 26262
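A minimal sketch of the shadow-mode pattern referenced above (the decision functions are hypothetical placeholders): both paths see the same input, only the production path takes effect, and divergences are logged for later analysis.

```python
# Shadow-mode pattern: the candidate runs alongside the production
# heuristic; its output is logged, never applied. The decision functions
# are hypothetical placeholders.
import logging

log = logging.getLogger("shadow")

def shadow_step(sensor_input, heuristic_decision, model_decision):
    active = heuristic_decision(sensor_input)   # what the system actually does
    candidate = model_decision(sensor_input)    # logged only, never applied
    if candidate != active:
        log.info("divergence: heuristic=%r candidate=%r", active, candidate)
    return active  # production behavior is unchanged
```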
Wasm Integration: Wasm modules run identically on laptop, bench, and target hardware β reproducible tests. Deterministic environment (AoT) for latency characterization. Trace data shows interop overhead.
Why configurable test bench instead of fixed hardware setups?
Flexibility: Software-defined topologies allow fast scenario changes without hardware rebuilds (switching time <1 ms).
Cost Reduction: Central bench instead of hardware duplicates at each partner → measurable CO₂ savings.
Data Collection: Bench delivers realistic data for GNN training.
What's the difference between bench tests and simulation?
Bench: Uses real hardware (RECS, real bus systems), delivers realistic latency/jitter data.
Simulation (OMNeT++, Mininet): Purely software-based, scales better for many scenarios, but less realistic timings.
Best Practice: Simulation for exploration, bench for validation.
What latency requirements does the switching matrix meet?
Physical-Layer Switching: The goal is minimal latency and fast switching times, critical for TSN and real-time tests. Characterization before deployment is planned.
Use Cases: ADAS, vehicle dynamics control, safety-critical paths.
How does remote access work in a TISAX-compliant manner?
Time Slot Reservation: Web interface, exclusive access during slot.
Multi-Tenant Isolation: Separate namespaces, VLANs, encrypted (TLS 1.3).
No Production Data: Only bench-generated data; pseudonymized test data is allowed.
Audit Trail: All access logged (TISAX VDA ISA 6.0 Level 3).
Note: TISAX and security elements are planned but currently optional.
Which E/E architectures are supported?
Legacy Domains: Decentralized control units separated by function (powertrain, chassis, infotainment).
Zone Controllers: Regionally grouped controllers by vehicle areas (front, rear, left, right).
Central Computer: Highly integrated platforms (L3+ autonomy, all functions centralized).
Test Bench Advantage: All stages testable in one platform → validate migration scenarios.
Which RECS variants and form factors are supported?
t.RECS: Thermally optimized (x86), for high-performance workloads (ADAS). Supports standard form factors:
SMARC: Ultra-low-power (ARM Cortex-A), ideal for sensor nodes.
COM-HPC: High-performance (x86, GPU-ready), for central computers.
COM-Express: Industrial standard, broad vendor support.
All variants are TSN-capable and available in various architectures (x86, ARM, RISC-V).
What's the difference between HIL, SIL, and Shadow Mode Testing?
HIL (Hardware-in-the-Loop): Real ECUs + simulated environment → real hardware validation.
SIL (Software-in-the-Loop): Purely software-based → validation before hardware availability.
Shadow Mode (A/B Testing): AI models parallel to production → log suggestions, don't apply. Validation without risk.
Use Case: SIL → HIL → Shadow Mode → Production (step-by-step validation).
How does the test bench integrate with Wasm and AI?
Wasm: Modules run identically on laptop, bench, and target hardware → reproducible tests. Bench provides deterministic environment (AoT compilation) for latency characterization.
AI: Bench collects topology graphs + latency measurements → GNN training (latency prediction, placement). Validation in shadow mode.
More Technology Pages: SDV Platform · Artificial Intelligence · WebAssembly


