Look inside your processor

The key factor for proper testing and efficient debugging is observability – ideally without affecting the system.

This is exactly what we deliver:

CEDARtools®

is a novel dynamic analysis technology, built on dedicated electronics and clever software, that creates a live digital twin representation of the monitored processor to perform live coverage measurements and runtime checks defined in a high-level language.

CEDARtools® is the first non-intrusive, continuous live observation technology for embedded systems: it observes release code without influencing it.

Our technology unlocks a wide range of new applications, including:

  • Measuring code coverage without instrumentation, even at higher functional test levels such as integration and system tests.
  • Measuring complex timing behavior without influencing the system.

How it works

CEDARtools® takes advantage of the trace and debug units (e.g. Arm® CoreSight, Intel® Processor Trace, Infineon MCDS) embedded in almost all modern processors, such as Infineon AURIX, NXP QorIQ®, Intel Atom® and Arm® Cortex®. These trace units expose the details of the operation of the CPU and its peripherals to the outside. However, they easily produce a few Gbit/s of trace data, which is output either via the traditional parallel trace port or via the more modern and more reliable serial interfaces (Nexus Aurora, Arm® HSSTP, Infineon AGBT, PCIe).

The Technical Challenges

The live analysis of the execution trace at run time is a quantum leap over the offline analysis of recorded trace data, as it eliminates the bottlenecks imposed by intermediate buffering. However, two major technical challenges have to be overcome:

  1. The highly compressed trace data stream must be decompressed and the control flow of the CPU(s) must be reconstructed. This demanding computation must often cope with the execution traces of multiple fast CPUs running at nominal clock frequencies above 1 GHz. The decoding may be further complicated by additional abstractions and indirections introduced by the operating system in use, such as Linux, Zephyr, VxWorks®, RTEMS or FreeRTOS (a simplified software sketch of this reconstruction follows this list).
  2. The reconstructed control flow must be condensed into an event stream abstraction suitable to drive the desired backend task. For example, (a) branch information may be recorded for coverage analysis, or (b) dynamic properties may be computed over the event stream and validated against a temporal logic specification.
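
The following sketch illustrates the core idea behind challenge 1 in plain C: a decompressed stream of taken/not-taken branch decisions is replayed over a static control-flow graph recovered from the binary. The data structures and the tiny example CFG are illustrative assumptions only; the actual decoding in CEDARtools® runs in hardware and copes with the full complexity of real trace protocols.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* One basic block of the statically known control-flow graph (CFG). */
typedef struct {
    uint64_t start;         /* start address of the basic block             */
    uint64_t taken_target;  /* successor if the terminating branch is taken */
    uint64_t fallthrough;   /* successor if it is not taken                 */
} basic_block;

/* Tiny example CFG; in practice it is extracted from the release binary. */
static const basic_block cfg[] = {
    { 0x1000, 0x1020, 0x1010 },
    { 0x1010, 0x1000, 0x1030 },
    { 0x1020, 0x1000, 0x1030 },
    { 0x1030, 0x1030, 0x1030 },
};

static const basic_block *cfg_lookup(uint64_t addr)
{
    for (size_t i = 0; i < sizeof cfg / sizeof cfg[0]; i++)
        if (cfg[i].start == addr)
            return &cfg[i];
    return NULL;
}

/* Replay 'n' decompressed branch decisions starting at 'entry' and report
 * every basic block the CPU executed along the way. */
static void reconstruct(uint64_t entry, const bool *taken, size_t n)
{
    const basic_block *bb = cfg_lookup(entry);
    for (size_t i = 0; bb != NULL && i < n; i++) {
        printf("executed block 0x%" PRIx64 "\n", bb->start);
        bb = cfg_lookup(taken[i] ? bb->taken_target : bb->fallthrough);
    }
}

int main(void)
{
    /* taken, not taken, taken: visits 0x1000 -> 0x1020 -> 0x1030 */
    const bool decisions[] = { true, false, true };
    reconstruct(0x1000, decisions, sizeof decisions / sizeof decisions[0]);
    return 0;
}
```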

The live analysis of trace data over arbitrarily long program runs enables (a) measuring control flow coverage during the execution of integration tests and system tests, as well as (b) monitoring dynamic constraints, which can be used to validate (b1) the correct coupling of data and control flow and (b2) the runtime behavior of an application.
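
A minimal software sketch of backend (a), branch coverage recording, assuming the reconstructed event stream delivers one event per executed conditional branch together with its outcome. The identifiers are illustrative only; in CEDARtools® this bookkeeping is performed by the FPGA hardware while the target runs.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_BRANCHES 3u   /* conditional branches in the binary under test */

/* Full branch coverage requires every branch to be seen taken and not taken. */
typedef struct {
    bool taken_seen;
    bool not_taken_seen;
} branch_cov;

static branch_cov cov[NUM_BRANCHES];

/* Called by the trace decoder for every reconstructed branch event. */
static void on_branch_event(unsigned branch_index, bool taken)
{
    if (branch_index >= NUM_BRANCHES)
        return;
    if (taken)
        cov[branch_index].taken_seen = true;
    else
        cov[branch_index].not_taken_seen = true;
}

/* Summarise branch coverage after the test run. */
static void report_coverage(void)
{
    unsigned outcomes_hit = 0;
    for (unsigned i = 0; i < NUM_BRANCHES; i++)
        outcomes_hit += (cov[i].taken_seen ? 1u : 0u)
                      + (cov[i].not_taken_seen ? 1u : 0u);
    printf("branch coverage: %u of %u outcomes (%.0f%%)\n",
           outcomes_hit, 2 * NUM_BRANCHES,
           100.0 * outcomes_hit / (2 * NUM_BRANCHES));
}

int main(void)
{
    /* A short run: branch 0 both ways, branch 1 only taken, branch 2 never hit. */
    on_branch_event(0, true);
    on_branch_event(0, false);
    on_branch_event(1, true);
    report_coverage();   /* prints: branch coverage: 3 of 6 outcomes (50%) */
    return 0;
}
```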

Use Case: Timing Verification

This example from the ADAS domain demonstrates the automatic, non-intrusive and continuous monitoring of a timing constraint, which checks that the time between read_brake_sensor() and activate_brakes() is always less than 5 ms. The release binary (no software instrumentation is required) is loaded into the processor (Infineon AURIX, NXP QorIQ®, Intel Atom® or Arm® Cortex®). During the execution of integration tests or system tests in a HIL environment (or during in-field observation), the trace data stream is output by the processor via a parallel trace port or via a serial high-speed interface (Nexus Aurora, Arm® HSSTP, Infineon AGBT, PCIe). This stream synchronizes an FPGA-based digital twin representation of the relevant processes within the device under test (DUT).
When the predefined instructions read_brake_sensor() and activate_brakes() are executed, events are generated that can be examined on the fly against the specified temporal properties. The event processing units used for this are programmable in the TeSSLa language, and a large number of temporal properties can be monitored in parallel. TeSSLa's macro support also enables the easy adoption of industry standards such as AUTOSAR TIMEX or AMALTHEA.
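
To illustrate what such a check computes, the following plain-C sketch evaluates the 5 ms constraint over a timestamped event stream in software. The event encoding and function names are assumptions made for this example; in CEDARtools® the equivalent property is expressed in TeSSLa and evaluated live by the event processing units on the FPGA, as described next.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Events reconstructed from the trace: which predefined instruction fired, and when. */
typedef enum { EV_READ_BRAKE_SENSOR, EV_ACTIVATE_BRAKES } event_id;

typedef struct {
    event_id id;
    uint64_t timestamp_ns;                 /* timestamp carried by the trace stream */
} trace_event;

#define DEADLINE_NS (5ull * 1000 * 1000)   /* 5 ms expressed in nanoseconds */

/* Returns true iff every activate_brakes() follows the most recent
 * read_brake_sensor() within the 5 ms deadline. */
static bool check_brake_latency(const trace_event *ev, size_t n)
{
    bool have_read = false;
    uint64_t last_read_ns = 0;

    for (size_t i = 0; i < n; i++) {
        if (ev[i].id == EV_READ_BRAKE_SENSOR) {
            have_read = true;
            last_read_ns = ev[i].timestamp_ns;
        } else if (ev[i].id == EV_ACTIVATE_BRAKES && have_read) {
            if (ev[i].timestamp_ns - last_read_ns > DEADLINE_NS)
                return false;              /* temporal property violated */
        }
    }
    return true;
}

int main(void)
{
    const trace_event run[] = {
        { EV_READ_BRAKE_SENSOR,  1000000 },   /* t = 1 ms                 */
        { EV_ACTIVATE_BRAKES,    3000000 },   /* 2 ms later: OK           */
        { EV_READ_BRAKE_SENSOR, 10000000 },   /* t = 10 ms                */
        { EV_ACTIVATE_BRAKES,   17000000 },   /* 7 ms later: violation    */
    };
    printf("property holds: %s\n",
           check_brake_latency(run, sizeof run / sizeof run[0]) ? "yes" : "no");
    return 0;
}
```
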
The CEDARtools® solution leverages event processing units that execute low-level TeSSLa operators natively, so they merely need to be re-programmed for a given monitoring task; a time-consuming, application-specific synthesis of FPGA logic is not required. A change to the high-level property description can therefore be applied to the trace data stream within seconds.
Alternatively, the CEDARtools® hardware (CEDARbox) can be configured to measure structural coverage (statement coverage, branch coverage, MC/DC).