How should our sector manage the opportunities and risks associated with AI? Andrew Knight, RICS Data and Technology Lead, explores the rise of AI over the last decade and the role of professionals working with this technology.

This article was originally published in the New Civil Engineer and has been reproduced with their permission.

Andrew Knight

AI, data and tech lead, RICS, London, United Kingdom

AI Impacts published a survey in August 2022 in which the typical respondent thought there was a 5% probability of advanced AI causing a very negative event, such as human extinction. Before we all stock up on tinned goods and pasta, let us examine the other 95% of outcomes, where we are dealing with, one hopes, far less extreme outcomes, but implications nonetheless that need to be managed on a risk basis: by applying a range of interventions, and by a sector that is educated, aware, and engaged with the opportunities and risks associated with AI.

The rise of AI over the last decade or so in its various flavours has been turbocharged over recent months with the emergence of tools such as ChatGPT. So, in addition to complex statistical approaches such as regression analysis, supervised and unsupervised machine learning, and neural networks, we now need to add generative AI and large language models to our lexicon of terms to digest and understand.

AI in all its various forms and flavours is becoming increasingly pervasive across all sectors, including the built and natural environment. Some applications will be generic and, in many cases, quite prosaic and relatively low risk, with use cases such as note taking, transcription, and workflow automation. But with AI being used to power customer service chatbots and to assist with recruitment, we can also find ourselves exposed to higher risks and potentially damaging outcomes if bias and errors occur.

Our sector is already seeing AI used for applications directly related to developing, building, and maintaining assets across their full lifecycle, such as cost estimating, benchmarking, scheduling, asset management, and the analysis of big data sets that can only practically be processed with AI.

When we consider the different models and approaches that AI can take, we face a fundamental issue: they range from so-called white boxes – whose conclusions and outcomes can be well understood and documented – through to an increasingly opaque range of tools that can be described as black boxes – where the internal decisions, and the data used to train and drive those decisions, are hard if not impossible to explain. So we are faced with huge opportunities in the form of AI, varying degrees of risk depending on its use, and, in many cases, tools and models whose decision-making processes are hard if not impossible to interpret or explain.
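To make the distinction concrete, here is a minimal Python sketch using the open-source scikit-learn library; the data, feature names, and model choices are illustrative assumptions, not a real valuation workflow. It contrasts a white-box linear model, whose fitted coefficients can be read and explained directly, with a black-box neural network, whose predictions offer no comparable line of explanation.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, illustrative data: floor area (m2) and building age (years)
# against construction cost.
rng = np.random.default_rng(0)
X = rng.uniform(low=[50, 0], high=[500, 60], size=(200, 2))
y = 2000 * X[:, 0] - 1500 * X[:, 1] + rng.normal(0, 5000, size=200)

# White box: each coefficient has a direct meaning that can be
# documented, challenged, and explained to a client.
white_box = LinearRegression().fit(X, y)
print("cost per m2:", round(white_box.coef_[0]),
      "cost per year of age:", round(white_box.coef_[1]))

# Black box: the prediction may be accurate, but the thousands of
# internal weights offer no comparable explanation for any single output.
black_box = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
).fit(X, y)
print("predicted cost for 250 m2, 10 years old:",
      round(black_box.predict([[250, 10]])[0]))

Neither model is "better" in the abstract; the point is that the professional should know which kind of model sits behind any advice they rely on.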

“Professional scepticism will remain an important skill to ensure we employ the right AI for the job and continue to provide reasoned advice to our clients and other stakeholders.”

As professionals, we should continue to exercise our professional judgement as to the applicability of a particular AI approach for a particular purpose. We need to understand the risks inherent in each application of AI, ensure all affected stakeholders are aware that AI is being used, and, where possible, gain as much understanding as we can of the nature of the models being used and the quality and provenance of the data driving them. Professional scepticism will remain an important skill to ensure we employ the right AI for the job and continue to provide reasoned advice to our clients and other stakeholders. The data used by AI can be incomplete, wrong, out of date, and in some cases malicious, supplied deliberately to damage the performance of the models. We must also understand the emerging legal and regulatory issues around patents, intellectual property, copyright, and privacy that ChatGPT and other generative tools are now raising.
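To give a hypothetical flavour of what that scrutiny can look like in practice, the short Python sketch below uses the pandas library to flag data that is incomplete, obviously wrong, or out of date before it reaches a model; the column names and thresholds are assumptions chosen purely for illustration.

import pandas as pd

def basic_data_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality concerns."""
    issues = []
    # Incomplete: flag columns where more than 5% of values are missing.
    for col in df.columns:
        share_missing = df[col].isna().mean()
        if share_missing > 0.05:
            issues.append(f"{col}: {share_missing:.0%} of values missing")
    # Wrong: flag physically impossible values.
    if "floor_area_m2" in df.columns and (df["floor_area_m2"] <= 0).any():
        issues.append("floor_area_m2 contains non-positive values")
    # Out of date: flag records more than five years old.
    if "valuation_date" in df.columns:
        age = pd.Timestamp.now() - pd.to_datetime(df["valuation_date"])
        if (age > pd.Timedelta(days=5 * 365)).any():
            issues.append("some valuation_date entries are over five years old")
    return issues

Checks like these cannot prove that data is trustworthy, but they give the professional concrete, documented questions to put to whoever supplied it.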

As domain experts in the built environment, we need to play a bigger role in the development, governance, operation, and calibration of AI. Whilst we don’t need to become data scientists and start coding ourselves, we should learn how to work with data scientists, understand the fundamentals of the approaches and tools that they use, and be comfortable with the statistical language and terms that so many AI approaches use. It is also important to allow well developed and managed AI to produce decisions and outputs without human intervention, where such intervention could reintroduce human bias that has been removed through the application of AI.

Regulators and professional bodies will need to take a role in developing forward-looking education, guidance, standards, and regulations that balance the risks against the opportunities of AI and emphasise the positive role that built environment professionals can play in the responsible use of AI.