AI is everywhere in AEC discourse, but most advice is generic: read a whitepaper, attend a webinar, or buy a tool. That won’t move the needle. Real change comes from trying things on your own projects. At a session I presented recently, the refrain was the same: stop chasing hype and start solving your own specific problems. This post lays out an operational approach to practical AI in AEC, a guide for firms ready to apply AI to architecture, engineering, and construction workflows.
People · Process · Technology—but Don’t Start With the Tech
New tools are seductive. They promise instant efficiency, better deliverables, and fewer headaches. The trap is buying a shiny tool without knowing what issue you’re solving. The correct order is simple: people → process → technology.
Capture the way your team works before you change it. Map the workflow, talk to the people doing the work, and find the bottlenecks. Often the highest-impact fixes are process tweaks or small role changes—not enormous software purchases. And when you do pick a tool, do it to support a tested process and the people who must use it, not to force-fit your operations to a product.
Make Your Tribal Knowledge Repeatable
Your firm’s competitive advantage is the expertise that lives in people’s heads. If a senior engineer can answer a client’s master plan question from intuition, that’s valuable—and fragile. Turn that knowledge into something reproducible. Speaking from my own experience:
Example 1: A parking structure restoration firm had one senior engineer who alone could produce realistic life-cycle repair-or-replace estimates to inform client decisions. We interviewed him, mapped the inputs he used, and built a simple no-code, web-based calculator that encoded his rules. He didn’t lose control; he became the reviewer instead of the bottleneck. Clients got answers faster, and the firm scaled its capacity to answer these questions.
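To make the pattern concrete, here is a minimal sketch of what “encoding the expert’s rules” can look like. The real calculator was built on a no-code platform, so the Python below is purely illustrative: the thresholds, field names, and unit costs are invented, not the engineer’s actual rules.

```python
# Hypothetical sketch of the "encode the expert's rules" pattern.
# The real calculator was built on a no-code platform; thresholds,
# field names, and unit costs here are invented for illustration.

from dataclasses import dataclass

@dataclass
class StructureInputs:
    age_years: int           # age of the parking structure
    deck_area_sqft: float    # total deck area
    distress_pct: float      # observed deck distress, 0-100
    chloride_exposure: bool  # deicing salts or coastal environment

def repair_or_replace(s: StructureInputs) -> dict:
    """Return a rough life-cycle recommendation from simple rules."""
    if s.age_years > 40 and s.distress_pct > 30:
        action = "replace"       # old structure, heavy distress
    elif s.distress_pct > 15 and s.chloride_exposure:
        action = "major repair"  # moderate distress, aggressive exposure
    else:
        action = "routine repair"

    # Placeholder unit costs in $/sq ft, not real estimating data.
    unit_cost = {"replace": 120.0, "major repair": 45.0, "routine repair": 12.0}
    return {"action": action, "estimate": unit_cost[action] * s.deck_area_sqft}

print(repair_or_replace(StructureInputs(45, 90_000, 35.0, True)))
```

The specific rules matter less than the shift they enable: once the logic is written down, the expert reviews outputs instead of producing every answer by hand.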
Example 2: A business development team had no standardized prospecting playbook. We captured the steps top reps used, mapped a repeatable sequence, and automated the tedious data-gathering portion with off-the-shelf data tools. The reps still qualified leads manually, but the heavy lifting was automated, so new hires started with a usable pipeline on Day 1.
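If your team wanted to build that data-gathering step itself instead of using off-the-shelf tools, the skeleton might look something like this; the field names and filter rules below are hypothetical:

```python
# Hypothetical sketch of automating the data-gathering step of a
# prospecting playbook. The real project used off-the-shelf data
# tools; the field names and filter rules below are invented.

import csv

def gather_prospects(path: str) -> list[dict]:
    """Pre-filter a raw lead export so reps start from a short list."""
    prospects = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Encode the top reps' first-pass rules as filters.
            if row["industry"] != "parking":      # target niche only
                continue
            if int(row["structures_owned"]) < 2:  # portfolio owners only
                continue
            prospects.append(row)
    # Reps still qualify each lead by hand; this just removes the noise.
    return prospects

short_list = gather_prospects("raw_leads.csv")  # export from your data tool
print(f"{len(short_list)} leads worth a rep's time")
```

The design choice mirrors Example 1: automate the mechanical first pass and keep human judgment for the qualification call.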
Both wins share a pattern: capture what works, map it, then automate the repetitive parts. Generic AI won’t know your niche; your job is to teach it using your data and your people’s rules. These are real-world AI use cases in AEC—not theory, but repeatable wins.
Embrace the “Jagged Frontier”—Get Your Hands Dirty With AI
AI’s abilities are uneven. It will do some hard tasks brilliantly and some simple ones badly. That edge—the jagged frontier—is where learning happens. The quickest way to learn what AI can do for you is to use it on real tasks.
Here’s a practical way to get started: commit to 10 hours over the next 2–4 weeks and run this experiment.
10-Hour Practical AI Experiment (Repeatable)
- Pick one small, real problem.
- Map the current process in 10–15 minutes: who does what, what the inputs and outputs are, and where the bottlenecks live.
- Assign a champion. Give one curious person ownership of the experiment and 2–3 hours/week to play.
- Prototype with simple tools. Use an LLM for text tasks, workflow integration software for basic automations, and low-code tools for quick web-based prototypes. Don’t over-engineer it; you want fast feedback.
- Validate quickly. Compare AI output to a human baseline on a few real examples (a minimal harness is sketched after this list). If it helps enough, iterate. If it’s garbage, shelve or constrain it and learn why.
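Here’s what that validation step might look like in practice: a bare-bones harness that puts AI drafts next to human baselines so the process owner can grade them. The call_llm stub and the sample tasks are placeholders for whatever model and work products you’re actually testing.

```python
# Bare-bones validation harness: put AI drafts next to human
# baselines on a few real examples and grade them by hand.
# call_llm() is a stub; wire it to whatever LLM your champion
# is prototyping with.

samples = [
    # (task prompt, human-baseline answer) -- use real project examples
    ("Summarize this RFI response for the client: ...", "Human summary goes here"),
    ("Draft a field-observation memo from these notes: ...", "Human memo goes here"),
]

def call_llm(prompt: str) -> str:
    # Placeholder so the script runs end to end; replace with a real call.
    return "AI draft (replace this stub with a real model call)"

for i, (prompt, human) in enumerate(samples, start=1):
    print(f"--- Example {i} ---")
    print("HUMAN:", human)
    print("AI:   ", call_llm(prompt))
    # Grade each pair: usable as-is, usable with edits, or unusable.
    # Tally the grades before deciding to iterate, constrain, or shelve.
```

Even five graded pairs will tell you whether the tool deserves more of your champion’s hours.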
Two weeks of focused, hands-on tinkering will teach you more about where AI automation in engineering belongs in your firm than a year of passive reading.
Practical Guardrails
- Don’t automate broken processes. Automating a bad workflow only speeds the failure.
- Treat practical AI in AEC as an assistant, not an author. Always validate critical outputs with human judgment.
- Start small and iterate. Small wins build credibility and momentum; large programs without early wins die on the vine.
- Keep ownership visible. Champions, reviewers, and SOPs matter. Who will maintain the tool next year?
Closing: Do the Work
AI isn’t a magic wand. It’s a tool that can multiply the value of good process and deep expertise—but only if you put it to work on the right problems. Block the time, pick the small experiments, and empower champions. The firms that roll up their sleeves and learn by doing will lead this jagged frontier. The rest will wait for a vendor roadmap and keep getting generic advice.
About the Author: Nick Heim, P.E.
Nick’s interests lie at the intersection of the built world and technology, and he can be found looking for the ever-changing answer to the question, “How can we do this better?” He is active on LinkedIn, where he produces content about the use of technology in his civil engineering career and small business.