219 Design offers a 2-Day Workshop called Essential Modern Software Infrastructure for leaders looking for help setting up their software team for optimal success. Kelly Heller and I put together this handy 4-part guide on How to Build Your Software Infrastructure Team for people who want to do it themselves or get a sneak peek of what the workshop covers.
In Part 1, we shared a few core principles we live by here at 219 Design that help us deliver great code. Part 2 discussed source control, which keeps your entire team in agreement about the current state of your software. In Part 3, we discussed Automation and Containers, which help keep your team in agreement about the proper runtime and build-time environment(s) for your software. Part 4, the final section of our four-part series, focuses on common practical concerns that we routinely deal with during our day-to-day lives as developers.
In this section, we will cover automated tests, regression tests, special considerations for firmware and hardware testing, and simulation. A good test suite lets you work faster: the time you spend writing and curating it is recovered through reduced debugging time. It is an up-front investment that greatly reduces the total cost of development.
Much has been written about the different kinds of tests, and there are many differing, passionately debated definitions of “unit” test vs. “integration” test. We’ve found that the specific terminology and scope of a test don’t determine its value. What matters most is that your tests are automated, fast, and run by CI.
Regardless of the specific type, tests need to be fast and automated. If tests are difficult to run, developers will be less inclined to run them regularly. Tests need to be fast enough to run without breaking your train of thought. Unless your test suite is fast enough to run in its entirety, being able to easily pick a relevant subset to run is vital.
Having a single script that runs your test suite enables developers to run tests easily, and it makes integrating the suite into your continuous integration setup trivial. Your CI can then run the entire test suite, including tests that take too long for developers to run regularly, on every commit. That isolates the commit that introduced a bug, making debugging far easier.
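To make that concrete, here is a minimal sketch of a single test entry point in C (the EXPECT macro and test names are purely illustrative, not a specific framework we use). Developers run the binary locally; CI builds it and fails the job on any non-zero exit code.

```c
/* run_tests.c - minimal sketch of a single test entry point (names are hypothetical). */
#include <stdio.h>

static int failures = 0;

#define EXPECT(cond)                                                  \
    do {                                                              \
        if (!(cond)) {                                                \
            printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #cond);    \
            ++failures;                                               \
        }                                                             \
    } while (0)

/* Example unit under test. */
static int add(int a, int b) { return a + b; }

static void test_add(void)
{
    EXPECT(add(2, 2) == 4);
    EXPECT(add(-1, 1) == 0);
}

int main(void)
{
    test_add();
    /* ...register additional test functions here... */
    printf("%d failure(s)\n", failures);
    return failures == 0 ? 0 : 1;  /* non-zero exit fails the CI job */
}
```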
“Fool me once, shame on you; fool me twice, shame on me.”
At an implementation level, regression tests are usually indistinguishable from any other automated test. Conceptually, though, they play a different role in your test suite. Regression tests are written after a bug is found in production or testing.
Regression tests generally should be documented as something separate from other tests. They may live alongside other tests in the suite, but a simple comment with the date and a description of the bug (or a link to your bug tracker) is often enough to set the test apart as a regression test. Regression tests generally should not be pruned from the test suite as it grows.
Remember the red in “red; green; refactor”: make sure the test fails on any branch where your fix has not yet been applied. This ensures that the failing test is an accurate proxy for the bug.
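For example, a regression test might look like the following sketch, where the bug, the function under test, and the tracker number are all hypothetical:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical unit under test: trims a trailing newline in place. */
static void trim_newline(char *s)
{
    size_t n = strlen(s);
    if (n > 0 && s[n - 1] == '\n') {
        s[n - 1] = '\0';
    }
}

/* Regression test, 2021-04-02 (hypothetical bug, tracker #123):
 * sensor readings with a trailing newline were rejected downstream.
 * Confirmed to fail (red) on a branch without the fix, green after. */
static void test_trim_newline_regression(void)
{
    char reading[] = "3.30\n";
    trim_newline(reading);
    assert(strcmp(reading, "3.30") == 0);
}

int main(void)
{
    test_trim_newline_regression();
    return 0;
}
```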
At its core, firmware is not significantly different from other software. However, it is often written in C, which can make unit testing more complex than in other languages. Some of our favorite C unit testing tools are:
Testing a hardware device can be even more complicated than testing pure software. There exists a continuum with different cost/benefit tradeoffs:
On the low-cost end of the spectrum, there is virtually nothing preventing you from adding unit tests straight away, especially once you are familiar with the tools listed in the prior section.
For the tradeoffs involved along the other end of the spectrum, see the excellent post CI for Embedded Systems, by James Munns.
Got both a Debug Build and a Release Build? Run tests on both! Some bugs may only show up on optimized release builds.
“Test what you ship, ship what you test.”
With any complex hardware/software device, being able to simulate the hardware lets your software team accelerate their development. Things to simulate:
The above are all specific cases of “finding seams”. Seams are places where you can alter behavior in your program without editing code in that place (from Michael Feathers’ Working Effectively with Legacy Code).
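As an illustration of a seam (the sensor interface and both implementations below are hypothetical), a small function-pointer struct in C lets callers depend on an interface rather than the hardware itself, so the real driver can be swapped for a simulated one without touching the calling code:

```c
#include <stdio.h>

/* A seam: callers depend on this interface, not on the hardware itself. */
typedef struct {
    int (*read_temperature_c)(void);
} TemperatureSensor;

/* Real implementation would talk to the hardware (stubbed here). */
static int hw_read_temperature_c(void)
{
    /* ...I2C / ADC access on the device... */
    return 25;
}

/* Simulated implementation for development machines and CI. */
static int sim_read_temperature_c(void)
{
    return 42;  /* or replay values captured from a real device */
}

static const TemperatureSensor hw_sensor  = { hw_read_temperature_c };
static const TemperatureSensor sim_sensor = { sim_read_temperature_c };

/* Application code is written against the seam... */
static void report(const TemperatureSensor *sensor)
{
    printf("temperature: %d C\n", sensor->read_temperature_c());
}

int main(void)
{
    /* ...so swapping hardware for simulation is a one-line change. */
    report(&sim_sensor);
    return 0;
}
```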
Simulating your hardware frees your software developers from needing access to (probably scarce) hardware. It also lets you isolate the cause of many bugs to either the hardware or software:
This process sounds tedious, but we have consistently found it is faster than debugging the software and firmware together with the hardware in the loop—especially when the full system has long bring-up times.
In this section, we will discuss test-driven development, behavior-driven development, and process purity.
Write tests first. Ultimately, that’s all that test-driven development is.
“Red; green; refactor.” Write a test and ensure it fails (red). Write some code to make the test pass (green). Modify the code so that it is up to your standards (refactor).
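As a tiny, hypothetical illustration of one pass through that cycle (the clamp() function below is invented for the example): the test is written first and fails because the code does not yet exist (red), then just enough code is written to pass (green), and cleanup happens with the test as a safety net (refactor).

```c
#include <assert.h>

/* Step 2 (green): the simplest implementation that satisfies the test.
 * It did not exist when the test below was first written and run (red). */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* Step 1 (red): written first, against code that did not yet exist. */
static void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(-3, 0, 10) == 0);
    assert(clamp(99, 0, 10) == 10);
}

int main(void)
{
    test_clamp();  /* Step 3 (refactor): tests stay green while you clean up. */
    return 0;
}
```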
Code is inherently testable if you write the tests first.
Writing the tests first enables (enforces?) abstraction in the design. You will often find you need access to some module that hasn’t been written yet. In that case, you have to introduce an abstraction and a fake implementation for the test. You can then fill in the real implementation later.
Beware: the level of abstraction is key.
See also this good collection of tips (with links to the Google Testing Blog).
Behavior-driven development follows the same guiding idea as test-driven development, except the tests are higher-level and usually written in a DSL like Gherkin. These high-level tests of business requirements are also called Acceptance Tests.
Don’t be dogmatic. You don’t need to sustain (or ever reach!) 100% test coverage. Some features are much harder to test and generally not worth the effort, such as GUIs and third-party integrations. It can be good to have a single test that exercises such features, but full coverage tends to be brittle. Brittle tests are a time sink because your team must repeatedly spend time debugging and updating them.
In this section, we will cover logging, configuration, and dogfooding. These are recurring themes common to most software projects.
You really should have a reckoning with your logs at least once a week (i.e., “log craftsmanship” is iterative, like software craftsmanship). Know where your logs are. Have a plan for log rotation or archival. Consider keeping examples of “golden” (happy-path) logs, so that when you receive a log that corresponds to a crash or a bug, you have a known-good run to compare against.
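As a sketch of what that consistency might look like (the log path and format below are assumptions, not a recommendation of a specific library), keeping every message timestamped, leveled, and in one well-known place makes it much easier to diff a problem log against a golden one:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical minimal logger: every message gets a timestamp and a level,
 * and everything goes to one well-known file so "where are the logs?" has
 * a single answer. */
static void log_msg(const char *level, const char *msg)
{
    FILE *f = fopen("/var/log/mydevice/app.log", "a");  /* assumed path */
    if (f == NULL) {
        return;
    }
    time_t now = time(NULL);
    char stamp[32];
    strftime(stamp, sizeof stamp, "%Y-%m-%dT%H:%M:%S", gmtime(&now));
    fprintf(f, "%s [%s] %s\n", stamp, level, msg);
    fclose(f);
}

int main(void)
{
    log_msg("INFO", "boot complete");
    log_msg("ERROR", "sensor timeout");  /* easy to grep against a golden log */
    return 0;
}
```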
Constants that may need to be tweaked should go into a config file. This is critical during development, when these values are still being determined, and it is not harmful in production. Of course, these configuration values may be critical to the correct functioning of your device. In that case, you can verify the configuration in production by hashing the expected configuration and checking that hash at runtime before the software starts.
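Here is a hedged sketch of that check (the config path, the expected hash value, and the choice of FNV-1a are all assumptions for illustration): the startup code hashes the configuration file and refuses to run if it doesn’t match the value recorded when the release was cut.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* FNV-1a: a simple, well-known non-cryptographic hash, fine for detecting
 * accidental edits (use a real digest if tampering is a concern). */
static uint64_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL) {
        return 0;
    }
    uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;  /* FNV prime */
    }
    fclose(f);
    return h;
}

int main(void)
{
    /* Hypothetical path and expected hash, recorded when the release was cut. */
    const uint64_t expected = 0x1234abcd5678ef00ULL;
    if (fnv1a_file("device.cfg") != expected) {
        fprintf(stderr, "config mismatch: refusing to start\n");
        return EXIT_FAILURE;
    }
    /* ...normal startup continues... */
    return EXIT_SUCCESS;
}
```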
Use your own system if at all possible. Use small parts of the system in isolation if that is easier.
If you’ve been implementing the practices from Parts 1-3, you’ve seen how automation can make your software development life much easier. Here in Part 4, we advocate for the remaining essential practice: good testing methodology. Combined, the practices shared throughout this series will keep your project moving efficiently and confidently toward the finish line rather than spiraling out of control. Please grab your favorite tip and implement it today. We’ll be waiting to toast your success at the finish line!
Keep Calm and Automate On. And call us if you get flummoxed along the way!
Date published: 05/19/2021