We are currently facing a simultaneous barrage of attacks on multiple fronts: the financial, commodity, marine and technology sectors, and possibly, in future, the political front. For those whose role is to defend the fort by ensuring stability and continuity in the status quo, or those tasked with offering investment advice, confidence in the forecasts and bulwarks supporting institutions is bound to be shaken. During this period, many will be working overtime, running numerous stress tests, forecasts and control measures in an attempt to get ahead of the change.
However, is that really possible? For instance, for banks, is it possible to run a fully recursive stress test model that incorporates both price declines and high delinquency rates while also throwing in the scenario of deposit flight? For analysts and academia, is it possible to construct models, often regression-based, to forecast trends in the various sectors of the property market when the underlying relationship between past and present could be upset by the sea change going on around us (or, arguably, even when it is not)?
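To make the first question concrete, here is a toy sketch of what such a recursive exercise might look like. Every number and feedback rule below is invented purely for illustration and reflects no actual bank's model; the difficulty lies not in coding the loop but in defending every assumed coefficient.

```python
# A toy, purely illustrative recursive stress test: price declines, delinquencies
# and deposit flight feeding back into one another. All parameters are assumptions.

def recursive_stress_test(quarters=8,
                          price_shock=-0.05,      # assumed quarterly price decline
                          base_delinquency=0.02,  # assumed baseline delinquency rate
                          deposit_flight=0.03):   # assumed quarterly loss of deposits
    price_index, deposits, capital, loan_book = 1.0, 100.0, 10.0, 50.0
    for q in range(1, quarters + 1):
        price_index *= (1 + price_shock)
        # Assumed link: falling collateral values push delinquencies up.
        delinquency = base_delinquency * (1 + 2 * (1 - price_index))
        # Credit losses erode capital; deposit flight shrinks funding.
        capital -= delinquency * loan_book
        deposits *= (1 - deposit_flight)
        # Feedback: a weakened bank lends less, deepening the price decline.
        if capital < 8:
            price_shock -= 0.002
        print(f"Q{q}: prices {price_index:.2f}, delinquency {delinquency:.2%}, "
              f"deposits {deposits:.1f}, capital {capital:.1f}")

recursive_stress_test()
```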
For a start, these questions are momentous enough in themselves, because to question the processes and methodologies of conventional practice would mean going up against the millions of labour hours of careful deliberation and documentation that produced the current state of practice.
Given the constraints of space, we shall break this discourse into two parts. For the first, we shall look at whether forecasting models work well in this environment or perhaps even under normal circumstances. We will also touch on the analytic zeitgeist of the moment — big data. For the second part, we shall muse on the topic of land scarcity in Singapore before finishing with negative interest rates.
Quantitative modelling
When it comes to forecasting or policy analysis of the real estate market here, the more quantitatively inclined analysts would often reach into their modelling bag and fall back on regression methods. Because these are easy to use (productivity software, from the simple Excel add-on to expensive statistical packages, is a dime a dozen) and commonly taught in undergraduate classes, many apply this “standard” model approach to the real estate market. For many, its formulation holds particular attraction for forecasting.
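For readers unfamiliar with what that standard approach looks like in practice, a minimal sketch follows. The drivers, data and coefficients are entirely synthetic and chosen only to illustrate the form of the exercise, not as a recommended specification.

```python
# A minimal sketch of the regression-based "standard model" approach:
# regress a real estate series on macro drivers, then extrapolate.
# All data below are synthetic and purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                    # 40 quarters of hypothetical history
gdp_growth = rng.normal(2.5, 1.0, n)      # assumed macro drivers
employment_growth = rng.normal(1.0, 0.5, n)
net_office_demand = (0.8 * gdp_growth + 0.5 * employment_growth
                     + rng.normal(0, 1.0, n))

X = sm.add_constant(np.column_stack([gdp_growth, employment_growth]))
model = sm.OLS(net_office_demand, X).fit()
print(model.params, model.rsquared)

# "Forecast" by plugging in assumed future values of the drivers.
future_drivers = sm.add_constant(np.array([[2.0, 0.8], [1.5, 0.6]]),
                                 has_constant="add")
print(model.predict(future_drivers))
```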
However, is the application of such methods an appropriate approach to structural analysis and the forecasting of the real estate market?
The answer requires some thinking and an allusion to the sciences. A recent compilation of opinions from luminaries in fields such as physics, statistics and psychology suggested that stumbling blocks to scientific discovery can come from established ideas and that, for progress to happen, these ideas must die. Even when a model has been refined to sit as close as possible to theory (if any exists for real estate), and polished against fit statistics, test statistics and ex-post forecasting trials, it tends to miss the mark by a “mile” when put into practice.
It is problems like these that stir up thoughts of similar conundrums in science, in that regression analysis only provides an approximation of the underlying construct at an observable scale. In truth, the repeated inability to make accurate predictions makes this forecasting approach far worse than any scientific model. With scientific models, even if we do not know why a system behaves as it does, we are at least confident that in repeated experiments a given input produces a certain output.
Not so with real estate (or other economic models). Relationships have been observed to change over time. If we are not confident about all the outputs, then how can we say that a certain percentage decline in real estate prices will not affect the economy by such and such degree?
We shall now look into why these “standard model” approaches for real estate are off the mark. Let us take the case of net demand for office space. At the individual corporate level, the space ultimately agreed to be taken up or forsaken, and the time to move, are decisions made by a finitely countable number of players (tenants) in the game. Each of these players is in turn a collection of decision makers within a company, interacting with the agent and the landowner.
In some ways, aren’t these decision makers, agents and landowners — let’s call them actors — quite like subatomic particles that regression analysis cannot measure? Each of these actors is bound by weak and strong hierarchical forces (ranks within a company and human-made regulations). To capture or predict the behaviour of these countably many actors — each with their own multi-dimensional considerations to weigh — using quantitative models is, to say the least, impractical. For if I were to propound one, it would be what Newton would call an idea “so great an absurdity that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into”.
Therefore, at an observable scale, a regression-based approach (or any other commonly used method) would provide only a rough — and even then highly imprecise — model of our real estate market from the start date to the end date of the historical data period. The actual outcome in each period is the summation of the actions of a countably finite set of actors interacting to agree or disagree on some course of action(s) after considering multi-dimensional issues — for example, what the lease contract permits, what the regional CEO will say when he sees the place, and whether it is easy to hire staff in this country. When it comes to execution, the implementation is also probabilistic in nature. In the background, government regulations act like a field that either adds resistance or assistance to the actors’ course of action(s).
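To see the contrast between the two pictures, here is a rough sketch of the aggregation this describes: net take-up emerging from a finite set of actors making discrete, probabilistic decisions rather than from a smooth macro relationship. The probabilities, frictions and space figures are all invented for illustration.

```python
# A sketch of aggregation from the bottom up: net take-up as the sum of many
# discrete, probabilistic tenant decisions. All numbers are invented.
import random

random.seed(1)

def tenant_decision(space_occupied, lease_friction, hq_pressure):
    """One tenant's net change in space (sq ft), decided probabilistically."""
    p_move = max(0.0, hq_pressure - lease_friction)   # assumed simple interplay
    if random.random() < p_move:
        return -space_occupied * random.uniform(0.2, 0.6)   # gives up part of its space
    if random.random() < 0.1:                                # occasional expansion
        return space_occupied * random.uniform(0.05, 0.2)
    return 0.0

tenants = [{"space": random.uniform(5_000, 50_000),
            "friction": random.uniform(0.0, 0.5),   # lease terms, fit-out costs, etc.
            "pressure": random.uniform(0.0, 0.6)}   # cost-cutting pressure from HQ
           for _ in range(200)]

net_demand = sum(tenant_decision(t["space"], t["friction"], t["pressure"])
                 for t in tenants)
print(f"Net take-up this period: {net_demand:,.0f} sq ft")
```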
An illustration
Drill down to an illustration: a consultancy serving the oil and gas sector, where the global CEO has decided to pare costs. The regional head then decides to relocate the office to a lower-cost city, but the in-house real estate executive tells him he has to consider the potential drag from their existing lease, which still has two years to expiry, if they cannot find a replacement tenant. Each of these actors has their own domain of interest and may sometimes act irrationally because of imperfect information flow, or their actions may be delayed by the process of getting clearance from compliance bodies, whose outcome is also probabilistic. The process of extricating themselves from their existing premises will therefore involve hierarchical sets of actors, each acting not in a deterministic fashion but in a discrete, probabilistic manner. On the landowner’s side, the decision process will be probabilistic too, with the actors being the marketing agents who, in turn, transmit information up and down the hierarchy with their own sets of processes.
It would be futile to attempt a general standard model that describes all the actors in all situations. How then do we tie that up with the macro regression-based models? The answer is not to try. Never the twain shall meet. There is no fundamental theory of everything for real estate, nor even any robust theory at the micro level. (At the sub-macro locational level, one can still use hedonic regressions to explain how spatial and qualitative attributes affect the behaviour of certain variables, but there is no fixed specification. For example, walking distance to the nearest MRT station may be significant only in certain places or certain periods of analysis, not necessarily across the board.)
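As an aside, a hedonic specification of the kind alluded to might look like the sketch below. The data are synthetic; the point of the example is simply that the MRT-distance coefficient can be significant in one subsample (a locality or a period) and insignificant in another, which is why no fixed specification exists.

```python
# A sketch of a sub-macro hedonic regression: price per sq ft on floor area,
# age and walking distance to the nearest MRT station. Data are synthetic, and
# the MRT effect is deliberately present in only one "locality".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
area = rng.uniform(500, 2000, n)          # sq ft
age = rng.uniform(0, 40, n)               # years
mrt_walk = rng.uniform(1, 20, n)          # minutes to nearest MRT station
central = rng.integers(0, 2, n)           # 1 = assumed "central" locality
psf = 1500 + 0.1 * area - 5 * age - 20 * mrt_walk * central + rng.normal(0, 80, n)

for label, mask in [("central locality", central == 1),
                    ("other locality", central == 0)]:
    X = sm.add_constant(np.column_stack([area[mask], age[mask], mrt_walk[mask]]))
    fit = sm.OLS(psf[mask], X).fit()
    print(label, "MRT-walk coefficient p-value:", round(fit.pvalues[3], 4))
```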
All this may sound like hyperbole, but the long and short of it is that for real estate analysts, resorting to a macro model may not provide a better understanding of the market — structural analysis at best gives a blurred image of the market or, worse, for forecasting purposes, could be off target. And this could be why many forecasts — particularly for a heavily intervened residential market like ours — go awry. If real estate analysis is stuck somewhere between the pre-Newtonian and, at best, the Newtonian age, what should analysts do?
Well, besides holding on to a quantitative handrail, analysts need to drill down to the micro level to understand the thought processes of both tenants and landowners, planning guidelines, legal issues that can create friction in a relocation or forced sale, cross-linkages with banking covenants, banks’ risk-control measures and so on.
As it is, there is often a sense of detachment between those who face clients and those confined to analysis. If the former do not have the bandwidth to cross over and understand the latter’s position, then at the very least the latter should attempt to cross that bridge. Otherwise, cycle after cycle, analysts who use numerical analysis to proffer forecasts and policies for the real estate market will keep forcing supply and demand statistics into quantitative models, forgetting that their predecessors of two decades ago — who by now are perhaps either retired or structurally unemployed — got their model answers wrong for the wrong reasons, wrong for the right reasons, and right for the wrong reasons.
Magic in big data
Big data is without doubt the buzzword of the moment. Its promoters, mostly academia and software providers, say big data analytics can uncover patterns, hidden correlations, market trends, customer preferences and other useful business information. Findings from the analytics team, they say, will deliver more effective marketing, increased revenue potential, better customer targeting and other advantages over business rivals.
At first glance, within the real estate domain, this type of analytics would seem immediately useful to those in the retail trade (landowners, agents and retailers), mortgage bankers and town planners. For those in these trades, the benefits that big data analytics can bring to an organisation are not in doubt. However, having spent some time in this area, I feel several issues have to be highlighted, or we will end up disappointed by the hopes pinned on big data analytics.
For those who are starting off, big data analytics is often of little use if the legacy record-keeping process has been spotty, tardy, poorly classified and/or negligent.
Take, for example, a bank looking to expand its housing loan business. If the database was only recently set up, or even if it was set up four years ago, it is not robust enough to use as a standalone decision tool. Why? Because the database is likely to be populated with entries transcribed from old pen-and-paper records that did not anticipate the need for more granular classification.
For example, in paper records recently converted into electronic ones, there may be a field for the mortgagee’s occupation but little other information. When the analytics team runs a stress test under 2008 global financial crisis conditions and tabulates the percentage of mortgagees 90 days past due (DPD) on their payments by occupation, “pilot” comes out tops, followed by “army”. Does that imply that pilots and army regulars are high-risk customers when a crisis hits the economy, and thus that more stringent lending criteria should be applied to these occupations? That would be extremely presumptuous, because the high DPD for both professions could be caused by factors other than occupation itself (and different factors for each occupation, too). For one, during a crisis, why would army regulars, whose jobs are quite insulated from the rest of the economy, not be paying up on time?
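Mechanically, the tabulation described above is trivial; a sketch with invented figures follows. The ease of producing the ranking is precisely the danger, because the output says nothing about why a class is delinquent.

```python
# A sketch of the tabulation described above: share of mortgagees 90+ days past
# due, by occupation. Data are invented. Treating this raw ranking as a lending
# rule would be presumptuous, as the article argues.
import pandas as pd

loans = pd.DataFrame({
    "occupation": ["pilot", "pilot", "army", "army", "teacher", "engineer",
                   "pilot", "army", "teacher", "engineer"],
    "dpd":        [120,     95,      100,    0,      0,         30,
                   0,       91,      10,     0],      # days past due
})
loans["dpd_90_plus"] = loans["dpd"] >= 90

by_occupation = (loans.groupby("occupation")["dpd_90_plus"]
                      .mean()
                      .sort_values(ascending=False))
print(by_occupation)   # "pilot" and "army" come out on top, as in the example
```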
People problem
The problem does not lie with big data analytics itself. It is a “people problem”: it comes from those who use the information and from those who apply it when the quality and quantity of the data are suspect. On the subject of users, line leaders in organisations, who because of their client-facing skill sets tend to be high on emotional quotient, often do not spend enough time thinking hard about the output from the “dull” analytics team — or implicitly have little faith in the power of analytics, believing that its research leads are not carefully thought out. Also, as big data is mostly about statistics, many line and staff people do not have a good fundamental understanding of the topic. Many a time, if one brings up the term “standard error”, you will start to see eyes rolling and those around becoming uncomfortable. Yet these are the same people who studied basic statistics in undergraduate programmes.
Big data analytics is often conducted without a hypothesis or theory at hand. In some areas of study, particularly risk management, there is no luxury of controlled experiments. The worst thing that can develop from this is that users degenerate to the point of treating data mining as the means to an end. Can you imagine the possible long-term outcomes if pure coders, without a good understanding of statistics and of social and economic theory, start taking command of the analytics department of a major institution?
This is not to say that big data is without benefits. There are benefits; in the retail trade, say, big data can be applied even without strong theoretical justification — by mining for nascent consumer trends and then carrying out controlled tests, perhaps by bringing in new product lines in small quantities and observing the response. How much faith one should place in a found correlation or trend will depend on the cost factor. The user can ask whether it will have a lingering effect on the reputation of the retailer or the mall, or, once the event is over or the product line completely sold, whether there is a need for follow-up. Thus, if a retailer or landowner wants to find out whether there is any recent trend that can be exploited in the short term, big data analytics may come in handy for such a fire-and-forget campaign.
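A fire-and-forget test of that kind could be as simple as the sketch below, which compares the conversion rate of a small trial product line against the usual rate before any larger commitment is made. All figures are invented; how much weight to give the result still comes down to the cost factor.

```python
# A sketch of a simple controlled check on a mined trend: does a small trial
# product line convert better than the usual lines? Figures are invented.
from statsmodels.stats.proportion import proportions_ztest

trial_buyers, trial_footfall = 60, 400          # hypothetical trial product line
baseline_buyers, baseline_footfall = 500, 5000  # hypothetical existing lines

stat, p_value = proportions_ztest(
    count=[trial_buyers, baseline_buyers],
    nobs=[trial_footfall, baseline_footfall],
    alternative="larger",                       # is the trial conversion higher?
)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# How much weight to give this result still depends on the cost factor:
# reputational risk, follow-up inventory, and so on.
```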
Owing to the cost of implementation, we are still at the early stage of adoption, and promoters are just beginning to sing the praises of big data. However, a cavalier attitude towards the use of big data will surely sow the seed for the next big blow-up when faulty data entry, bad classification and/or flippant interpretation lead to risky commitments of substantial amounts of capital or wrong corporate strategies.
The answer to that future problem lies in what we do today, for many are none the wiser, blindly throwing money at the magic it is supposed to bring. Is that hubris knocking on the door?
Alan Cheong is head of research and consultancy at Savills Singapore. He can be reached at alan.cheong@savills.com.sg.
This article appeared in The Edge Property Pullout, Issue 724 (April 18, 2016) of The Edge Singapore.