Andrew Knight

Global Data and Technology Lead, RICS

In part 1 of this series, we chart the rise of human-algorithm relations, automated valuation models and the iBuyer business model in the built environment.

The last few years of the pandemic have seen the real estate sector rally around new digital and technological solutions to keep its doors open and its businesses moving forward. iBuyers – real estate companies that purchase homes directly from sellers online – are one of several innovations that have gained increasing investor interest and growing success.

We examine the growth of algorithms, AI and automated valuation in real estate, and the promise and pitfalls associated with new and innovative uses of big data and algorithms. What follows is a cautionary tale about human-algorithm relations, and about why the computer is not the problem.

To err is human…

The Yale Book of Quotations gives the earliest citation of the phrase ‘To err is human, but to really foul things up you need a computer’ as 3 October 1969. Despite the relatively limited impact of computers on daily lives at that time, computers had already allowed humanity to put a man on the moon. The ability of computers to process data at scale also highlighted the potential for harm at scale.

More than half a century later, big data and algorithms are in ubiquitous use. It is critically important to revisit this quote, to re-examine its central tenet, and to discuss the relationship between humans and algorithms.

This discussion, presented in two parts, will focus on a real-world case that directly affects the built environment. The issues raised pose profound questions about how we design, develop, operate, manage, protect, and govern the data and technology that firms and governments use to make decisions. These decisions affect us across so many facets of our personal and working lives.

“Zillow’s share value dropped by two-thirds during 2021, wiping tens of billions of dollars off its market valuation”

Algorithms: a brief history of problem solving

So, what exactly is an algorithm? The Oxford Languages definition gives us the following meaning: ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.’ The earliest evidence of algorithms is found in the Babylonian mathematics of Mesopotamia, modern-day Iraq. The word ‘algorithm’ derives from the name of the ninth-century Persian mathematician al-Khwarizmi, whose book Al-jabr also gave us the term ‘algebra’.

The concept of algorithms was deeply embedded in our approach to problem solving centuries before the recent advent of computers. It is only natural that they have shaped the way we use the data and computational power at our disposal. Today we use this power to solve problems and predict the future behaviour of complex systems such as weather, economies, and the value of property.

A plethora of terms are used to describe the application of ever more complex algorithms to provide artificial intelligence (AI) capabilities. For the purpose of this discussion, the term ‘algorithm’ encompasses the full range of applications: from the simplest computer programs to regression analysis, machine learning and, finally, deep learning methodologies such as neural networks. These approaches vary in their sophistication, complexity, power and, crucially, in their ability to be fully understood and explained, even by those who create them. However, they all share the same need to be deployed and governed by humans.

Automated valuation models in the built environment

Across the built and natural environment, the concept of valuation for purposes such as investment, secured lending, financial reporting, compensation, and taxation is well established. The valuation process itself, whether computerised or not, can be seen as a prime example of the application of algorithms based on various data sets. These data sets include market transactions, yields, discount rates, property attributes, occupier characteristics, construction and renovation costs, and many other location-based factors.

Automated Valuation Models (AVMs) for residential property are already well established in many markets, such as the US and UK. Until recently, their use has been predominantly for secured mortgage lending, marketing, portfolio valuation by banks and other capital market participants, and for central and local government taxation of property.
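
To make the idea concrete, the sketch below fits a toy hedonic-style AVM: a linear model that estimates a property's value from a handful of attributes. The feature names, coefficients and sample data are entirely hypothetical and for illustration only; real AVMs draw on far richer, market-specific data sets and more sophisticated models than this.

```python
import numpy as np

# Hypothetical training data: [floor_area_m2, bedrooms, age_years] per property,
# with observed sale prices. Real AVMs use far richer, market-specific data.
features = np.array([
    [70, 2, 30],
    [95, 3, 15],
    [120, 4, 5],
    [60, 2, 40],
    [150, 5, 10],
], dtype=float)
prices = np.array([210_000, 320_000, 450_000, 180_000, 560_000], dtype=float)

# Fit a simple linear (hedonic) model: price ≈ X @ coeffs, with an intercept term.
X = np.hstack([features, np.ones((len(features), 1))])  # append intercept column
coeffs, *_ = np.linalg.lstsq(X, prices, rcond=None)

def estimate_value(floor_area_m2: float, bedrooms: int, age_years: float) -> float:
    """Return a toy automated valuation for a single property (illustrative only)."""
    x = np.array([floor_area_m2, bedrooms, age_years, 1.0])
    return float(x @ coeffs)

print(f"Estimated value: {estimate_value(100, 3, 20):,.0f}")
```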

The rise of the iBuyer

A more recent application of AVMs has been the emergence of the iBuyer business model, in which a firm offers a quick cash purchase of a property. The iBuyer takes on the burden of preparing, marketing, and reselling the property, making a margin on the transaction either by charging the seller a fee or by purchasing at a discount to the value it can achieve on resale. Many firms now operate iBuyer models in the US and UK.
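
As a rough illustration of the economics described above, the sketch below computes a hypothetical iBuyer gross margin combining the two sources of income: a seller fee and a discount to the expected resale value. All figures, names and cost categories are invented for illustration and are not drawn from any real iBuyer's accounts.

```python
def ibuyer_gross_margin(
    estimated_resale_price: float,
    purchase_price: float,
    seller_fee_rate: float = 0.0,        # e.g. 0.05 for a 5% service fee charged to the seller
    holding_and_repair_costs: float = 0.0,
) -> float:
    """Toy gross margin for a single iBuyer transaction (illustrative only).

    Margin can come from a fee charged to the seller, from buying at a
    discount to the expected resale value, or from a mix of both.
    """
    fee_income = purchase_price * seller_fee_rate
    resale_gain = estimated_resale_price - purchase_price
    return fee_income + resale_gain - holding_and_repair_costs


# Hypothetical example: buy at a 4% discount to an expected $300,000 resale value,
# charge a 5% seller fee, and spend $8,000 on preparation and holding costs.
margin = ibuyer_gross_margin(
    estimated_resale_price=300_000,
    purchase_price=288_000,
    seller_fee_rate=0.05,
    holding_and_repair_costs=8_000,
)
print(f"Gross margin: ${margin:,.0f}")
```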

One iBuyer firm, Zillow, is among the largest publicly traded real estate marketplaces in the US. It went public in 2011, entered the iBuyer market in 2018, and quickly grew this line of business. However, by November 2021, Zillow was shutting down its iBuyer operation and cutting 25 percent of its workforce (around 2,000 people).

Zillow’s share value dropped by two-thirds during 2021, wiping tens of billions of dollars off its market valuation, and it was expected to make a loss of around $500 million in the second half of 2021. Its story is our cautionary tale, and the source of the positive lessons we can take from the demise of its iBuyer business.

Coverage in the public domain highlights that multiple factors led to the financial losses incurred by Zillow’s iBuyer operations and to its consequent decision to shut down the business unit. Its use of artificial intelligence to power its house buying, selling and renovation decisions presents a great case study. It highlights the need to understand the limitations of AI, to build trustworthy AI, and to keep thorough checks on AI systems after they are deployed in the real world.