Digital Builder Ep 82: Evaluating the Risks of Using AI in Construction

AI is a powerful tool, but to quote Spider-Man's Uncle Ben, "With great power comes great responsibility."

Because AI is so compelling and (in some cases) easy to use, it's tempting to leverage it in many of our workflows. 

But as much as we love tech advancements, AI isn't a tool you can just unthinkingly run with. Using AI responsibly means being aware of its pitfalls and shortcomings. In AEC, we need to keep certain risks in mind—particularly when it comes to things like contractual obligations, data privacy, and safety. 

Here to discuss all that and more is May Winfield, Global Director of Commercial Legal and Digital Risk at Buro Happold. May's role focuses on risk management, and she offers a unique perspective on how to use AI properly and safely in construction. 

Watch the episode now

You can also listen to this episode on Apple Podcasts, Spotify, YouTube, and anywhere else you get your podcasts. 

On this episode

We discuss:

  • Identifying AI’s five biggest risk categories in AEC: reliance, confidentiality, copyright, personal data, and ethics
  • Why copyright and content attribution are such tricky issues with AI
  • The limitations of using in-house AI models trained on proprietary data
  • Why client requests are becoming more informed and specific (and why this is good!)

Separating hype from reality when it comes to AI

When it comes to AI, it's crucial to "separate the hype from the reality," says May. That's because while AI can be a powerful tool, it still has plenty of limitations, especially when its models rely on incomplete or inaccurate data.

May remarks, "We hear a lot of words about data being the new currency, data being the new gold. The thing that people don't say is how much of the data in the construction industry is validated, accurate, and complete. If you have AI models, how accurate are they going to be? And what are you going to use them for?"

There's also a lot of talk about AI taking over people's jobs. May's take? AI won't replace people just yet. What it will do is augment and enhance what we do. 

"You can use AI to give you options for a particular design, and you can take one of those options and develop it yourself, as well as develop content and ideas for how you present something in a more rational way," she says. 

She adds that AI will be most beneficial for ideation and "making boring tasks easy and faster."

The biggest risks AEC teams must be aware of when using AI

AI isn't a magic solution. As mentioned earlier, artificial intelligence has limitations, and failing to consider them exposes you to risks. Here are some of the biggest risks teams need to be mindful of when using AI for work. 


Over-reliance on AI

According to May, over-relying on AI can lead to subpar work and oversights.

"A generative AI model is only as good as the data it's trained on. But what if you don't know what data it's been trained on? Again, is it complete? Is it accurate? Has it just been scraped from the internet?"

May warns against using AI's output without verifying its recommendations. Let's not forget that we and our teams are still responsible for the deliverables we produce, even if AI helped us create them.

"Saying to your client, 'Oh, the AI got it wrong,' is like saying the calculator got it wrong. It's not going to help, and I think that's a real risk," May comments.

"It's more of a risk now, arguably with the fact that generative AI models can recognize photos and images. Someone may upload an image and say, 'Is this connection safe?' And the generative AI model says, 'Yes, based on the data I have, it is.' But what if it’s not, and something bad happens? The consequences could be serious."

Copyright infringement

Then there’s the issue of copyright infringement. This is a murky area because determining the originality and legality of AI-generated content can be complicated.

"Generative AI models scrape data across the internet or other sources. There are loads of cases at the moment, particularly in the US, where writers say ChatGPT has been fed all their novels and therefore is in breach of copyright because you can produce a novel exactly the same," May explains.  

"For instance, let’s say you ask for a particular thing and get an AI result. You don't know if it's in breach of copyright, but you use it in your project. Someone may then say, 'Hold on, that came originally from my work. You shouldn't have used it. I'm going to sue you and I’ll tell the client.'” 

"But also on the flip side, if you feed your matter in, and then someone uses it, that can prompt you to say, 'Well, you've breached my copyright.'"

Personal data and ethics 

May says we should be extremely careful with the data we feed AI solutions, particularly in publicly available tools. 

She likens feeding data into a public AI tool to "walking down the street and handing it over to someone."

"You can't delete it, and you can't remove it. If someone asks the right question, all that data could be extracted, and you lose confidentiality."

Beyond privacy concerns, sending information into the AI ether can also put you at risk of violating client agreements and obligations.

As May points out, "Most people's contracts will say, 'You will not reveal details of this project,' so you're also in breach of your contracts. Clients aren't going to be too pleased if they find out you have released all this confidential information."

Negligence and insurance

May also highlights concerns regarding professional indemnity insurance, which covers negligence but may not cover actions deemed reckless or knowingly incorrect.

"If you knowingly did something reckless and stupid, knowing it could be wrong, are insurers going to say, 'Well, why should we pay for you doing that?' So, the insurance industry will probably have to keep up to date with that as well."

How do we mitigate these risks?

Now, let's move on to risk mitigation. AI is here to stay, and yes, there are risks. But as with any tool, it's all about balancing caution and innovation.

The first step to doing that, says May, is to identify the technologies you need to use and why you should use them. 

Doing so will help you make informed decisions about which solutions to adopt, plus how (and how NOT) to use them.

From there, May recommends establishing internal guidance for those using AI. At Buro Happold, for example, she says they have a document outlining AI dos, don'ts, and best practices. She emphasizes keeping this document simple so that everyone can easily understand and follow it.

Lastly, May encourages folks to "trust your experts."

"With many of these things, it is very tempting to either make decisions or enforce them without saying to your technical team, 'What do you think? Let's spend our time going through all the possible options and then make an informed decision.'" 

"At Buro Happold, we have an AI working group, and I'm on it. We have our head of IT and our CTO, plus other people who come at it from different parts and give an opinion, and then you get a whole [picture]."

Looking ahead: what's next for AI and other technologies?

No episode of Digital Builder would be complete without discussing future trends. Looking at the horizon, May says she's excited to see how different technologies converge and intersect in the coming months and years. 

"I enjoy seeing the combination of technologies. So, for example, last week, I was in Hong Kong speaking at a blockchain conference. We were talking about how you can combine blockchain, smart contracts, digital twins, and AI. Everything improves as a result rather than sitting in silos."

She continues, "As part of that, I'm hoping we move away from contracts simply saying, 'Do some BIM, I want Revit, I want a digital twin.' The trend is for clients to be more informed and say, 'I want these five things; please deliver them to me.' And that means clients are going to be happier, and also the consultants and contractors are going to be able to deliver more accurately."

New podcast episode every week

Digital Builder is hosted by me, Eric Thomas. Remember, new episodes of Digital Builder go live every week. Listen to the Digital Builder Podcast on Apple Podcasts, Spotify, and YouTube.

Eric Thomas

Eric is a Sr. Multimedia Content Marketing Manager at Autodesk and hosts the Digital Builder podcast. He has worked in the construction industry for over a decade at top ENR General Contractors and AEC technology companies. Eric has worked for Autodesk for nearly 5 years and joined the company via the PlanGrid acquisition. He has held numerous marketing roles at Autodesk including managing global industry research projects and other content marketing programs. Today Eric focuses on multimedia programs with an emphasis on video.