
At 219 Design, we are always looking for ways to bring more value to your team by harnessing our expertise and embracing the latest solutions. Our clients’ interest in AI ranges from “don’t use it at all” to “let’s go!!!!” We have been using Generative AI on internal efforts to better understand and evaluate the technology for appropriate client use. Although Generative AI can be a contentious topic due to valid concerns regarding confidentiality and hallucinations, I am optimistic that responsible and conscientious use of AI can mitigate those concerns. Here’s an example of how we improved project delivery times without compromising confidentiality, quality, or integrity.
Improve Project Delivery Times Without Compromising Confidentiality, Quality, or Integrity
Scope Public Data
We use a paid, privacy-preserving model, so confidentiality is not a driving concern here. The more interesting constraint is scope. I’ve found that limiting our queries to publicly available information produces better results and cleaner thinking. When you commit to working only with public data, the question you’re asking becomes more precise almost automatically. Public documentation, published datasheets, and open-source changelogs are also the kinds of sources that an LLM can reason over reliably. The discipline of scoping to public data is, in practice, the discipline of scoping well.
Narrow the Input
With that boundary in place, I’ve found that narrowing the input further is just as important as keeping it public. An LLM asked to reason over a single, authoritative document produces far more reliable results than one asked to synthesize information from across the open web. Google’s Notebook LM is well-suited to this approach. You load a specific set of documents, and the model grounds its answers in exactly that material. A targeted datasheet, an errata document, a library changelog: the more constrained the dataset, the more trustworthy the output. This is also what makes hallucinations tractable. When the model’s source material is a 400-page datasheet, a wrong answer is easy to spot and easy to verify.
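The idea of narrowing the input can be illustrated with a toy sketch: score each section of a document against the question and hand the model only the best match. Everything here is invented for illustration (the section names, the placeholder text, and the word-overlap scoring are assumptions, not how NotebookLM works internally).

```python
from collections import Counter

# Toy sketch of "narrowing the input": score document sections against a
# question and keep only the best match to ground the model's answer.
# The sections below are invented placeholders, not real datasheet text.
sections = {
    "SPI configuration": "spi baud rate prescaler nss pulse mode dma requests",
    "I2C timing": "i2c timing register rise time fall time bus speed",
    "Clock tree": "pll hse hsi system clock prescalers peripheral clocks",
}

def score(question: str, text: str) -> int:
    """Count overlapping words between the question and a section."""
    q = Counter(question.lower().split())
    t = Counter(text.lower().split())
    return sum((q & t).values())

question = "what dma requests does the spi peripheral use"
best = max(sections, key=lambda name: score(question, sections[name]))
print(best)  # the single section worth handing to the model
```

Real systems use far more sophisticated retrieval, but the principle is the same: the smaller and more relevant the grounding material, the easier the output is to trust and to check.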
Use an LLM
Of the many use cases where AI can be deployed, the most valuable in my day-to-day work can be summarized as a super-powered Ctrl+F. Finding just the right piece of information in a dense technical document is often the most time-consuming part of solving a problem. Instead of plugging away at an ever-more-terrifying regex pattern, you can ask an LLM to surface the precise passage that unlocks the next stage of the problem. The value is in navigation, not generation.
Case Study: The STM32 Review
We were recently engaged to perform a safety review on a client’s hardware and firmware design. Our electrical and firmware teams approach this type of review in parallel, each working through their domain with independent rigor. As a first step, I loaded the publicly available datasheet, errata sheet, and STM32 CubeMX BSP configuration report into Notebook LM. The CubeMX report documents peripheral usage, configuration, and pin mapping. It is essentially a blueprint of the hardware initialization with no proprietary business logic exposed. My initial prompt was direct:
“Identify devices affected by errata in the current configuration. If a peripheral appears unaffected, list it separately so I can manually review it.”
Our Methodology
The results were an excellent starting point – a structured list of areas requiring attention, grounded in the documents I had provided. This is not a conclusion; it is a well-organized index that our engineers can use to prioritize and direct their in-depth review.
What made the STM32 review even more useful was what happened in the back-and-forth that followed. I refined the initial query several times, narrowing the scope and changing the angle of the question. Each iteration surfaced something the previous question had missed. This is where the real value of conversational AI shows up for technical work. It gives you a low-stakes environment to stress-test your own thinking. Before committing any findings to the report, I used the same session to poke at our proposed approach, asking the model to identify edge cases or conditions we might have overlooked. Not because the model has better domain knowledge than our engineers, but because articulating a technical argument precisely enough for a model to respond to it tends to surface gaps. It’s rubber duck debugging on demand.
Human Led, Company Owned
None of this changes who owns the outcome. Every piece of information that came out of those Notebook LM sessions was verified by a human before it became part of the report. The model flagged a peripheral as potentially affected by errata, so we read the errata entry ourselves and confirmed it. It produced a list of configuration items to check, and our engineers checked them. Some items did not apply because the configuration bypassed the errata; others were affected, and mitigations were required. The confidentiality boundary holds from start to finish. I used publicly available documents, and the findings were reviewed and validated by the team. AI compressed the search phase; it did not replace the review phase.
The Outcome: Less Time Searching, More Time Thinking
The pattern I have landed on is straightforward: use AI to move faster toward something you can verify, not to generate something you have to take on faith. Treating an LLM as a search and synthesis layer over a bounded, authoritative datasheet addresses both concerns that make generative AI contentious. Hallucinations become checkable, and confidentiality is preserved by design. The tool earns its place not by producing answers, but by helping me spend less time searching and more time thinking.
Get Started
At 219 Design, we see the full spectrum when it comes to AI adoption. Some teams prefer not to use it at all. Others are ready to go all in. Most fall somewhere in between.
Our approach? Meet you where you are.