What risks are associated with using AI in construction? In the final instalment of this three-part series, four industry experts explain the steps the industry can take to manage AI-related risks while capitalising on its benefits.
Anil Sawhney FRICS, Head of Sustainability, RICS
Mike Hill, Chief Digital Information Officer, RICS
Jugal Makwana, Senior Executive - Industry Transformation and Strategic/Open Ecosystem, Autodesk
James Garner FRICS, Global Head of Data, Insights and Analytics, Gleeds
Previous articles in this series illustrated how AI has the potential to significantly transform practices in the construction sector. This article outlines several risks that using AI poses, both currently and in the future, and discusses ways for the industry to capitalise on the benefits while managing these risks.
Construction projects generate vast data from design models, drawings, reports, estimates, project schedules, procurement records and sensor feeds. Yet, much of this data remains siloed, untapped for decision-making, earning it the label ‘dark data’. With the advent of AI tools, the construction sector has a transformative opportunity to extract valuable insights, patterns, trends and actionable information from these large datasets, even when almost 90% of the data is unstructured (e.g. emails, PDFs, images and videos).
It is important that structured data (e.g. models and databases), which adheres to established standards and practices, is still prioritised. The industry has long been criticised for its inconsistent adoption of data standards and information management (building information modelling) practices.
A strong connection between structured and unstructured data is essential if AI tools are to move beyond being mere add-ons and integrate seamlessly into design, construction, commissioning and post-construction workflows. Adhering to international open data standards (such as Industry Foundation Classes) and construction information classification systems (such as Uniclass) remains critical to avoiding fragmented workflows and reliance on unvalidated information. Unstructured data carries a higher risk of feeding wrong information into AI tools, and therefore of producing incorrect outcomes. Adherence to data standards can be achieved by leveraging well-defined data schemas and embracing the idea of a digital thread. A digital thread ensures coherence and continuity of information across the project and asset life cycle, enabling AI-driven solutions to deliver maximum value while maintaining data integrity and consistency.
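To make the schema idea concrete, here is a minimal sketch in Python of validating a structured cost record before it enters an AI pipeline. The field names and the simplified Uniclass-style code pattern are illustrative assumptions only, not a published standard:

```python
import re
from dataclasses import dataclass

# Illustrative only: a simplified record schema for a construction cost item.
# The fields and the code pattern below are assumptions for demonstration,
# not the actual Uniclass specification.
@dataclass
class CostItem:
    uniclass_code: str   # e.g. "Ss_20_05_15" (systems-table style)
    description: str
    quantity: float
    unit: str
    rate: float

def validate(item: CostItem) -> list[str]:
    """Return a list of validation errors; an empty list means the record
    passes these basic checks and can flow along the digital thread."""
    errors = []
    if not re.fullmatch(r"[A-Z][a-z]_\d{2}(_\d{2})*", item.uniclass_code):
        errors.append(f"classification code not recognised: {item.uniclass_code}")
    if item.quantity <= 0:
        errors.append("quantity must be positive")
    if item.rate < 0:
        errors.append("rate must be non-negative")
    return errors

item = CostItem("Ss_20_05_15", "Structural steel frame", 120.0, "t", 2450.0)
print(validate(item))  # [] -> record passes basic checks
```

In practice such checks would sit behind a common data environment, so that every record an AI model trains on or reasons over has already been classified and validated consistently.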
AI poses ethical risks regarding transparency and decision-making (understanding how AI reached a conclusion). An IBM adage reminds us, ‘a computer can never be held accountable; therefore, a computer must never make a management decision’.
As more tasks are outsourced to AI, it can become increasingly challenging to determine who is responsible for its shortcomings, including cases of trained-in bias and much-talked-about hallucinations. The trend toward AI-enabled automation may displace some job roles; in response, the industry should strive to retain human workers through reskilling programmes and alternative roles.
No matter how advanced it becomes, AI must always be treated as a tool used at the discretion of an individual’s judgement, not as a substitute for their judgement and expertise. Yottabytes of data can create a powerful foundational model, but this alone cannot lead to prudent decision-making – human experience, judgement and expertise are still needed. The age-old mantra of questioning output, verifying facts and testing for biases still holds good.
Currently, procurement practices in the construction industry are highly structured around documents rather than data. Apart from the challenges in procuring ‘validated’ data, there remains the problem of determining responsibility for AI’s errors.
Until trust-building principles for AI and robust protocols for assigning legal responsibility are developed, parties across the supply chain will be reluctant to take on the legal risks of using AI. RICS is currently developing a new professional standard on the responsible use of AI.
The ethical issue of AI bias (e.g. misidentified photos due to training or algorithmic bias) also opens construction firms to liability. This is another reason why mainstreaming AI in construction will be challenging until widespread standards are established on how responsibility should be determined when AI fails.
Currently, AI tools such as large language models (LLMs) are often available at little to no cost. However, this affordability may not last.
Almost all technology providers and numerous startups in the construction industry are investing in developing advanced tools and integrating them into industry practices. The costs associated with their development and operation may rise significantly, especially for top-tier foundational models. Expenses related to hardware, data centre infrastructure, energy consumption, cooling and training LLMs are expected to change over time. In the future, subscription prices for services like OpenAI could climb substantially, rumoured to reach as high as $2,000 per month for their most powerful model.
This poses a critical risk: reliance on inexpensive AI could leave firms unprepared when these tools become costly or scarce. Companies caught in this trap may revert to non-AI methods, disrupting operations and eroding competitive advantage.
To mitigate this risk, organisations can budget for rising AI costs, avoid locking critical workflows into a single provider, and retain the skills and processes needed to operate without these tools if necessary.
‘Keep your data in-house’ is a common concept across industries that is often repeated, especially in the new AI era. Before most of us had even heard of generative AI (GenAI), several organisations were already scraping the web and other data sources to train their models. When these models were launched, concerns about the confidentiality of organisational data shared with them emerged almost immediately, fuelling scepticism and mistrust. Recent litigation on this issue has not helped.
Companies developing their AI strategy should not ignore this issue. While it is most secure to keep data on locally accessible machines, this does not meet today's business requirements.
There are some encouraging developments in this area. For example, federated learning is an emerging technique that allows AI models to be trained across decentralised data sources, enabling companies to keep their data local while benefiting from AI’s collective intelligence. Data encryption and privacy-preserving AI are also gaining traction, allowing businesses to deploy AI solutions without compromising data security.
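A toy sketch of the federated idea: each firm fits a simple one-parameter model on its own private data and shares only the fitted weight with an aggregator, never the raw records. The data values and the averaging scheme (a bare-bones FedAvg) are invented for illustration:

```python
# Toy federated averaging: raw project data never leaves each firm;
# only model parameters are shared. All figures are invented.

def local_fit(data):
    """Least-squares slope through the origin, computed entirely on-site."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, y in data)
    return num / den

# Private datasets stay with each organisation.
firm_a = [(1.0, 2.1), (2.0, 3.9)]
firm_b = [(1.0, 1.9), (3.0, 6.3)]

# Only the fitted weights travel to the aggregator.
weights = [local_fit(firm_a), local_fit(firm_b)]
global_w = sum(weights) / len(weights)  # the shared "collective" model
print(round(global_w, 2))
```

Production systems add weighting by dataset size, secure aggregation and differential privacy on top of this basic loop, but the principle is the same: the collective model improves without any firm exposing its data.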
Until this issue is resolved, it will be difficult for firms to use AI fully without worrying about compromising the security of their data. Intellectual property will need to be safeguarded through transparency and trust principles for AI.
The construction sector has spent decades developing generally accepted standards and good practices, and understanding how AI fits into them will take considerable work. Existing standards may tell us to double-check especially risky decisions; AI might push us in the other direction, towards automating as much as possible for the sake of efficiency.
Ultimately, some balance must be found between the twin goals of safety and efficiency. Existing standards and legacy systems may not be compatible with specific AI tools; the question will be whether the standards or the tools need to be changed.
To evaluate AI's effectiveness and efficiency, users will need to develop widely accepted key performance indicators (KPIs) to track AI benefits. These could include metrics like project delivery time, resource optimisation, error reduction and safety improvements. Firms could rely on AI-enabled data analytics and employee feedback to determine whether AI is being used as intelligently as possible. Without a clear focus on delivering value, there is a risk that AI use will not contribute to the bottom line.
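A minimal sketch of how such a KPI comparison might be computed; the baseline and AI-assisted figures below are entirely hypothetical, used only to show the calculation:

```python
# Hypothetical KPI comparison: the before/after figures are invented
# purely to illustrate quantifying AI's contribution against a baseline.

def improvement(baseline: float, ai_assisted: float) -> float:
    """Percentage reduction relative to the baseline (higher is better,
    for metrics where lower values are desirable)."""
    return 100.0 * (baseline - ai_assisted) / baseline

kpis = {
    "project delivery time (weeks)": (52.0, 47.0),
    "estimating errors (%)":         (8.0, 5.0),
    "safety incidents (count)":      (12.0, 9.0),
}

for name, (before, after) in kpis.items():
    print(f"{name}: {improvement(before, after):.1f}% improvement")
```

The point is less the arithmetic than the discipline: each metric needs an agreed baseline and measurement method before AI's contribution can be attributed credibly.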
The rapid release of new AI models, disrupting approaches to model development and training (such as those pioneered by DeepSeek), and the emergence of Large Concept Models highlight the speed of innovation. However, this technological progress is outstripping our shared understanding of how these tools should be applied. As a result, firms must balance enthusiasm for AI’s potential with the need for responsible and informed adoption.
As an industry, we need to develop high-level principles for a data ecosystem that is fit for purpose and builds on existing data standards and information management practices (Figure 1). At its core, the ecosystem integrates critical dimensions such as time, cost, quality, safety and sustainability with contextual elements like people, environments, organisations and regulatory frameworks. It emphasises achieving meaningful social, environmental and economic outcomes through organised, interoperable, secure and trusted data. Key features like a single source of truth, a digital thread of information and seamless connectivity across horizontal, vertical and longitudinal axes ensure the ecosystem's effectiveness in supporting AI-driven insights and decision-making for construction projects and assets. Critically, such a data ecosystem must be designed to free data from organisational silos and make it available to train and operate AI models. This will be a significant challenge, but the opportunities are equally significant, with the potential for a profound sector-wide impact.
If implemented with careful consideration of ethical, legal and security issues, AI can help us enhance our ingenuity and augment our intelligence (see Gen AI in corporate functions: Looking beyond efficiency gains by McKinsey & Company). By doing so, the construction sector can harness AI's full potential and tackle wicked problems to drive sustainable and resilient growth.