No matter what anyone tells you, there are no certainties when it comes to real estate (and the more someone insists there are, the more likely they are trying to sell you something). That is why the standard practice in the real estate industry for decades has been to create and test models. This is a concept taken from the scientific community. While scientists form their theories based on empirical evidence, ultimately they have to test them. To do that, they codify their observations into a mathematical prediction and test it with varying inputs. By comparing models to what actually happens in the real world, we can determine what was miscalculated or overlooked in the model.
This is essentially the same process that commercial real estate teams follow when trying to understand building value. Buildings are basically cashflows (or at least should be). Their value is derived from current and future revenues minus expenses. Seems easy, right? Well, not when you have a building with ten tenants whose leases all have different price points, escalations, clauses and expirations. Oh yeah, and don’t forget that things in the building are always breaking or needing to be upgraded. Plus, who is to say what prices you will be able to get for a space once the leases are up? No one knows the future, remember?
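The core arithmetic is a discounted cashflow: each lease contributes rent (with escalations) for as long as it runs, expenses come off the top, and each year's net is discounted back to today. Here is a minimal sketch of that idea; the function name, the annual granularity, and the tenant figures are all illustrative simplifications, not how any particular valuation product models it.

```python
# A minimal sketch of valuing a building as a discounted cashflow.
# Assumes simplified annual figures; real models work at much finer grain
# and handle renewals, vacancy, and capital expenditure explicitly.

def building_value(leases, annual_expenses, discount_rate, years):
    """Sum each year's net cashflow, discounted back to today."""
    value = 0.0
    for year in range(1, years + 1):
        # Rent from every lease still active this year, with escalation compounding.
        revenue = sum(
            rent * (1 + escalation) ** (year - 1)
            for rent, escalation, term in leases
            if year <= term
        )
        net = revenue - annual_expenses
        value += net / (1 + discount_rate) ** year
    return value

# Three hypothetical tenants: (annual rent, yearly escalation, lease term in years)
leases = [(120_000, 0.03, 5), (80_000, 0.02, 3), (200_000, 0.025, 10)]
print(round(building_value(leases, annual_expenses=150_000, discount_rate=0.07, years=10)))
```

Even this toy version shows where the uncertainty lives: change the discount rate or drop a tenant at expiry and the value moves materially, which is exactly why analysts test scenarios rather than trusting one number.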
So, real estate analysts often create valuation models. They try to plug in all of the variables contributing to a building’s cashflow and test the model with different scenarios. Some teams will simply calculate pessimistic, neutral and optimistic scenarios to better understand their cost-benefit analysis. Others use advanced computing techniques to crunch a dizzying number of scenarios with every possible variable permutation. This is where the art and science of real estate finance come in. For many, this is their competitive advantage, and no two teams do it exactly the same.
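The "every possible permutation" approach is mechanically simple: take a few candidate values per driver and cross them. A sketch of that brute-force style, with hypothetical drivers and values chosen purely for illustration:

```python
# A sketch of brute-forcing scenario permutations. Each driver takes one of
# three hypothetical values (pessimistic / neutral / optimistic); real models
# have far more drivers and far richer valuation logic than this.
from itertools import product

rent_growth   = [-0.02, 0.00, 0.03]
vacancy       = [0.15, 0.10, 0.05]
exit_cap_rate = [0.08, 0.07, 0.06]

def stabilized_value(noi, growth, vac, cap):
    """Value one year's net operating income under a single scenario (very simplified)."""
    return noi * (1 + growth) * (1 - vac) / cap

results = [
    stabilized_value(1_000_000, g, v, c)
    for g, v, c in product(rent_growth, vacancy, exit_cap_rate)
]
print(f"{len(results)} scenarios, ranging from {min(results):,.0f} to {max(results):,.0f}")
```

Three values across three drivers already produces 27 scenarios; add a few more drivers and the count explodes combinatorially, which is why the heavy-permutation teams lean on computing power rather than spreadsheets.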
What complicates this already complicated process is that it relies on variables from many different sources. Rent rolls, lease terms, market data, industry benchmarks, economic forecasts, and anything else you would want to throw into the model often each come from a different source in a different format. For many teams that means a lot of time is spent putting data into the model rather than working on the model itself.
This is something that the PropTech world is thinking about in depth, none more than Altus Group, creators of the most popular commercial property valuation software in the world, ARGUS Enterprise. They did an incredible amount of research when they were developing their cloud functionality. James Hutton, a project manager at Altus Group, told me, “Our research to date has highlighted the fact that no two customers will model their portfolios in the exact same way. They will adopt modelling conventions that suit the unique needs of their business. This can make it challenging to develop ‘out of the box’ integrations between two commercial real estate applications.”
So what can be done? Here is James’ opinion: “There are two ways to handle this. Either customers can standardise on modelling conventions, or we can develop flexible integrations that can accommodate a variety of conventions including the ability to structure, map and update data properly.” He explained that there are pros and cons to each approach. Standardization constrains the way customers can model their data, whereas customization adds complexity and requires often-competing companies to integrate with each other. James thinks that we will need a bit of both, “we’ll need a certain amount of standardisation while allowing integrations to be tweakable to the needs of individual customers,” he said.
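The "flexible integration" half of that answer usually boils down to a per-customer mapping layer: each firm keeps its own conventions, and a translation table renames and restructures its data into a shared schema. A hypothetical sketch of the idea; none of these field names come from ARGUS or any real product, they are illustrative only.

```python
# A hypothetical sketch of a per-customer mapping layer: each customer supplies
# a table translating its own export columns into a common target schema.
# All field names here are invented for illustration.

def map_record(record, field_map):
    """Rename a source record's fields into the target convention."""
    return {target: record[source] for source, target in field_map.items()}

# Customer A exports rent rolls with its own column names...
customer_a_map = {"TenantName": "tenant", "BaseRent": "annual_rent", "LXP": "lease_expiry"}
record = {"TenantName": "Acme Co", "BaseRent": 120000, "LXP": "2027-06-30"}

print(map_record(record, customer_a_map))
# ...while Customer B would supply a different field_map for the same schema.
```

The appeal of this design is that the shared schema is the only thing that needs standardizing; everything customer-specific lives in a small, swappable mapping rather than in the integration code itself.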
To facilitate this shift from many unconnected data sources feeding a property model to a more streamlined approach, Altus Group created the ARGUS API, then reached out to other leading solution providers to use the API as the foundation for building data connections between applications. This gives solution providers an easy way to plug into the modeling software and puts an end to manual data entry and all of the wasted time and errors it produces.
How models are built and tested, and how their analysis contributes to large property decisions, is at the heart of any real estate organization. We all think differently and all have different (likely wrong) predictions. Therefore, we all model differently, as well we should. Luckily, the people creating the software we use for our models are listening. As modeling becomes more automated and less manual, the result will likely be much more sophisticated models and, hopefully, the ability to predict the future at least a little bit better.