
Embedded Software Testing: A Practical Guide

Testing strategies for firmware and embedded systems where traditional approaches don't apply. HIL testing, simulation, static analysis, and safety-critical testing explained.

Why embedded testing is different

Embedded systems break most of the assumptions that normal software testing relies on. There's often no screen, no keyboard, no debugger you can just attach. The code runs on specific hardware that behaves differently to your development machine. Updates after deployment may be expensive, difficult, or flat-out impossible. And when bugs escape into production, the consequences aren't just a crash. They can cause physical damage or safety incidents.

The specific challenges:

  • Hardware coupling. Code talks directly to sensors, actuators, and communication buses. Testing requires either the real hardware or a convincing simulation of it.
  • Resource constraints. Limited memory and CPU mean heavyweight test frameworks won't fit. Tests either need to be lean or run on a separate system.
  • Real-time requirements. You need to verify not just that the code produces the right answer, but that it does so within the time window.
  • Limited observability. Getting information out of the system is non-trivial. No console.log(). Sometimes your only debugging tool is an LED and an oscilloscope.
  • Hard to update. Devices in the field may be impossible to patch. That makes pre-deployment testing critical in ways that web development rarely is.

Testing levels

Unit testing

Test individual functions in isolation, typically on your development PC rather than the target hardware. The trick is abstracting hardware dependencies so pure logic can be tested without physical devices.

  • Use a Hardware Abstraction Layer (HAL) to separate hardware access from business logic
  • Mock hardware interfaces: feed fake sensor readings, capture actuator commands
  • Run on your development PC with frameworks like Unity, Google Test, or CppUTest
  • Fast feedback loop: tests complete in milliseconds, not minutes
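The HAL-plus-mock pattern above can be sketched in a few lines of C. This is a minimal illustration, not a real driver: the `temp_read_fn` interface, the `fan_should_run` logic, and the threshold values are all hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL interface: production code binds this to a real
 * sensor driver; tests bind it to a fake. Injected as a function pointer. */
typedef int16_t (*temp_read_fn)(void);

/* Pure logic under test: run the fan above a threshold, with hysteresis. */
bool fan_should_run(temp_read_fn read_temp, bool currently_on)
{
    int16_t deci_celsius = read_temp();   /* tenths of a degree */
    if (currently_on)
        return deci_celsius > 450;        /* stay on until 45.0 C */
    return deci_celsius > 500;            /* switch on above 50.0 C */
}

/* Test double: feeds a canned reading instead of touching hardware. */
static int16_t fake_reading;
static int16_t fake_temp_read(void) { return fake_reading; }
```

Because `fan_should_run` never touches a register directly, the hysteresis logic can be exercised on the host with any reading the test cares to invent, including values the real sensor would rarely produce.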

Integration testing

Tests how software components work together. You might still mock the hardware, but the focus is on real interactions between modules: does the sensor processing module correctly feed the control algorithm, which correctly drives the output stage?
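That sensor-to-control chain might look like the sketch below: two hypothetical modules (a moving-average filter and a bang-bang controller) wired together exactly as in production, with only the hardware edges replaced by test values.

```c
#include <stdint.h>

/* Module 1: sensor processing - a 4-sample moving average.
 * Illustrative only; real filters and fixed-point scaling will differ. */
int32_t filter_update(int32_t samples[4], int32_t new_sample)
{
    for (int i = 0; i < 3; i++)
        samples[i] = samples[i + 1];      /* shift the window */
    samples[3] = new_sample;
    return (samples[0] + samples[1] + samples[2] + samples[3]) / 4;
}

/* Module 2: control - drives the output stage from the filtered value.
 * Returns 1 to switch the (hypothetical) heater on, 0 to switch it off. */
int controller_output(int32_t filtered, int32_t setpoint)
{
    return filtered < setpoint ? 1 : 0;
}
```

The integration test feeds raw samples into `filter_update` and asserts on what `controller_output` would drive, so a mismatch in units or scaling between the two modules shows up here even though each passes its unit tests.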

Hardware-in-the-loop (HIL) testing

This is where things get interesting. The real firmware runs on real target hardware, but the external world is simulated. The device under test connects to a simulator that pretends to be the sensors, actuators, and communication partners.

  • Validates actual hardware/software integration, catching timing issues, register configuration bugs, interrupt handling problems
  • Tests real-time behaviour on the actual processor
  • Can simulate conditions that are impossible or dangerous to reproduce physically: sensor failures, extreme temperatures, power brownouts
  • Expensive to set up, but enables automated regression testing that would be impractical by hand

System testing

The complete system tested in a real or realistic environment. For a motor controller, that means an actual motor. For a weather station, that means a temperature chamber. For an industrial safety system, that means simulated fault conditions with real response verification.

Testing techniques

Simulation

Software simulators model the target processor and its peripherals. QEMU handles some architectures well; most chip vendors provide their own simulators; or you build a custom simulation environment that models your specific hardware.

Simulation advantages: Test before hardware arrives. Run faster than real-time. Inject faults and edge cases trivially. Parallelise test execution across multiple virtual devices. CI/CD friendly.

Static analysis

Analyses code without executing it. Finds bugs that testing might miss entirely: uninitialised variables, buffer overflows, null pointer dereferences, unreachable code. For safety-critical systems, static analysis isn't optional.

  • MISRA C compliance checking for automotive and safety-critical code
  • Coding standard enforcement
  • Data flow analysis: tracking how values propagate through the code
  • Complexity metrics: identifying functions that are too complex to test thoroughly

Coverage analysis

Measures which code paths your tests actually exercise. For safety-critical systems, standards like DO-178C and IEC 62304 may require 100% statement and branch coverage. Even for non-critical systems, coverage analysis reveals the code you haven't tested, and untested code is where bugs hide.

Fault injection

Deliberately causing failures to verify error handling. Simulate sensor failures, corrupt communication packets, interrupt power during flash writes. This is how you find out whether your error handling actually works, rather than assuming it does because the happy path tests pass.
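A simulated sensor failure is the simplest fault to inject. The sketch below is hypothetical throughout (`sensor_result_t`, `read_with_fallback`, and the hold-last-good-value policy are invented for illustration), but it shows the shape: a test double that fails on demand, and an assertion that the error path does what the design says.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sensor interface that can report failure. */
typedef struct { bool ok; int16_t value; } sensor_result_t;
typedef sensor_result_t (*sensor_read_fn)(void);

typedef struct { int16_t last_good; bool fault; } sensor_state_t;

/* Logic under test: on sensor failure, hold the last good value and
 * flag the fault instead of acting on garbage. */
int16_t read_with_fallback(sensor_read_fn read, sensor_state_t *st)
{
    sensor_result_t r = read();
    if (!r.ok) {
        st->fault = true;
        return st->last_good;
    }
    st->fault = false;
    st->last_good = r.value;
    return r.value;
}

/* Fault-injecting double: fails when the test flips the flag. */
static bool inject_failure;
static sensor_result_t faulty_sensor(void)
{
    sensor_result_t r = { .ok = !inject_failure, .value = 250 };
    return r;
}
```

The same pattern extends to corrupted packets or failed flash writes: the double decides when and how things break, and the test asserts on the recovery behaviour rather than the happy path.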

Safety-critical testing

Medical devices, automotive ECUs, aerospace flight systems, and industrial safety controllers don't just need thorough testing. They need documented, traceable, auditable testing that meets specific regulatory standards.

Requirements traceability

Every requirement traces forward to test cases that verify it. Every test case traces back to requirements it validates. If there's a gap in either direction, something isn't covered. Regulators check this.

Coverage requirements

Standards define minimum coverage levels depending on the safety classification:

  • Statement coverage: every line of code executed at least once
  • Branch coverage: every branch (if/else) taken both ways
  • MC/DC (Modified Condition/Decision Coverage): every condition independently shown to affect the outcome. Required for the highest safety levels in aerospace (DO-178C Level A)
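A small example makes the MC/DC requirement concrete. The interlock below is hypothetical, but the test vectors in the comment are a genuine minimal MC/DC set for this decision: each condition is paired with a vector that differs only in that condition and flips the outcome.

```c
#include <stdbool.h>

/* Hypothetical interlock: open the valve only if pressure is safe AND
 * (manual override OR automatic mode is enabled). */
bool valve_open(bool pressure_ok, bool override, bool auto_mode)
{
    return pressure_ok && (override || auto_mode);
}

/* MC/DC requires each condition to be shown to independently affect
 * the outcome. One minimal vector set for this decision:
 *   (T,T,F) -> true  vs  (F,T,F) -> false : pressure_ok independent
 *   (T,T,F) -> true  vs  (T,F,F) -> false : override independent
 *   (T,F,T) -> true  vs  (T,F,F) -> false : auto_mode independent
 * That is four distinct vectors for three conditions (the n+1 minimum);
 * branch coverage alone would accept just two. */
```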

Documentation

Test plans, test procedures, test results, and traceability matrices need to be documented and audit-ready. Good automation generates this documentation as a byproduct of the test process. You shouldn't be writing test reports by hand.

Test automation

Manual testing doesn't scale. With embedded systems that may run for years in the field, you need automated regression testing that catches new bugs every time the code changes.

Continuous integration

  • Run unit tests on every commit. No excuses.
  • Cross-compile for all target platforms automatically
  • Run static analysis on every change
  • Execute simulation-based integration tests nightly

HIL test automation

Automated HIL rigs can run hundreds of test cases without human intervention. The upfront investment is significant: dedicated hardware, custom test fixtures, and test scripting infrastructure. But for any product that will be maintained over years, the payback is clear.

Cost trade-off: A HIL test system can cost tens of thousands (or more). But manual testing of complex embedded systems costs more in engineering time, slower release cycles, and escaped bugs that reach customers.

Practical tips

  1. Design for testability from day one. Separate hardware access from logic. Use dependency injection. Make components testable in isolation. Bolting this on later is painful and expensive.
  2. Test on host first. Run as many tests as possible on your development PC. The feedback loop is orders of magnitude faster than flashing firmware and running on target.
  3. Use real hardware for what matters. Timing, interrupts, DMA, peripheral interactions need real hardware testing. The simulator won't catch everything.
  4. Automate early. Set up CI before the codebase grows. Adding automation to a large, untested codebase is much harder than growing it alongside the code.
  5. Log extensively. When bugs surface in deployed devices, logs are often your only debugging tool. Design logging infrastructure that works within your resource constraints.
  6. Test the failure modes. What happens when a sensor returns garbage? When power drops during a flash write? When the communication bus locks up? These are the bugs that matter most.
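Tip 5 usually means something far leaner than printf logging. A fixed-size ring buffer of compact event records is one common shape; the sketch below is illustrative (entry format, sizes, and names are all assumptions), but it shows the key properties: constant memory, no heap, and the oldest entries silently overwritten.

```c
#include <stdint.h>

/* Minimal fixed-size ring log: constant memory, no heap, suitable for
 * resource-constrained targets. Entry layout is hypothetical. */
#define LOG_ENTRIES 8

typedef struct { uint32_t timestamp; uint16_t event_id; } log_entry_t;

static log_entry_t log_buf[LOG_ENTRIES];
static uint32_t log_head;   /* total entries ever written */

void log_event(uint32_t timestamp, uint16_t event_id)
{
    log_buf[log_head % LOG_ENTRIES] =
        (log_entry_t){ .timestamp = timestamp, .event_id = event_id };
    log_head++;
}

/* Entries currently retained; older ones have been overwritten. */
uint32_t log_count(void)
{
    return log_head < LOG_ENTRIES ? log_head : LOG_ENTRIES;
}
```

On a real device the buffer would typically live in noinit RAM or be flushed to flash on fault, so the last events before a crash survive a reset.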

Frequently asked questions

Do I really need HIL testing?

Depends on the product. For a simple sensor node, probably not. Simulation and on-target manual testing may be sufficient. For anything with safety implications, complex real-time behaviour, or a long production life, HIL testing pays for itself quickly.

Which unit test framework should I use for C?

Unity is lightweight and widely used in the embedded community. CppUTest works well if your team is comfortable with C++. Google Test is powerful but heavier. For most embedded projects, Unity is a solid default choice.

How do I test real-time behaviour?

Unit tests on a PC won't catch timing issues because the execution model is completely different. You need either HIL testing on real hardware or a cycle-accurate simulator. Instrument the code with timestamping, use a logic analyser or oscilloscope to verify timing at the hardware level, and test under realistic load conditions.
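Timestamp instrumentation can itself be kept testable by injecting the cycle source. In the sketch below everything is hypothetical: on a real Cortex-M target `cycles_fn` would read the DWT cycle counter, while on the host a fake clock stands in so the measurement logic can be verified before it ever runs on hardware.

```c
#include <stdint.h>

/* Injected cycle source: hardware counter on target, fake on host. */
typedef uint32_t (*cycles_fn)(void);

typedef struct { uint32_t worst_case; } timing_stats_t;

/* Measure one invocation of a task and track the worst case observed.
 * The real-time check is then: worst_case <= deadline budget. */
void timed_run(cycles_fn cycles, void (*task)(void), timing_stats_t *st)
{
    uint32_t start = cycles();
    task();
    uint32_t elapsed = cycles() - start;   /* unsigned maths is wrap-safe */
    if (elapsed > st->worst_case)
        st->worst_case = elapsed;
}

/* Host-side doubles for the sketch. */
static uint32_t fake_clock;
static uint32_t fake_cycles(void) { return fake_clock; }
static void fake_task(void) { fake_clock += 120; }   /* pretend 120 cycles */
```

Note that this measures the instrumented path only; worst-case verification on target still needs the logic analyser and realistic load described above, since caches, interrupts, and bus contention don't show up on the host.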

Key takeaways

  • Embedded testing requires a layered approach: unit tests on PC, integration tests with mocked hardware, system tests on real devices.
  • Hardware-in-the-loop (HIL) testing validates real hardware/software integration and can simulate conditions impossible to reproduce physically.
  • Static analysis catches bugs that runtime testing misses. Essential for safety-critical systems.
  • Design for testability from the start. Retrofitting test infrastructure into monolithic firmware is painful.

Ready to discuss your project?

Tell us what you're working on. We'll come back with a practical recommendation and clear next steps.