If you haven’t already, please read the summary posting as an introduction. This is the fourth of four postings on the MPAC study and covers MPAC’s Lansink critique. My second posting, covering Part 1 of the study, is here. And my third posting, covering Part 2, is here.
Ben Lansink is a professional real estate appraiser based in Ontario. In February 2013 he published a study of two areas (Melancthon and Clear Creek, Ontario) where 12 homes, all within 1 km of an IWT, were sold on the open market. He used previous sales and MPAC assessments to establish what the prices were before the IWTs arrived and then compared them with the open-market prices after the turbines went into operation. The declines were enormous, averaging above 30%. The following (thankfully clickable) spreadsheet snapshot gives a good summary of his results.
In quite a departure from MPAC’s style, Lansink lists every sale, every price, every time-related area price increase rate and every source. Lansink establishes an initial price at some time before the IWTs were installed, applies a local-area inflation rate over the period between the sales, and compares the “should-have-been” price with what the actual sales price was after the IWTs were installed. In all 12 cases the final price was lower than the initial price, leading to an actual loss on the property. When the surrounding real estate price increases were factored in, the resulting adjusted losses were even greater. The compulsive reader might notice that the numbers above vary slightly from Lansink’s. In order to check his numbers I reran all his calculations in the above chart and there are some rounding errors – on the order of < $10. I posted on Lansink’s study when it came out, along with a second posting on a previous version of his study.
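The arithmetic Lansink applies is simple enough to sketch in a few lines. The figures below are purely illustrative (not taken from his study); the point is the method: grow the initial price at the local-area rate, then compare with the actual later sale.

```python
# A sketch of Lansink's price-adjustment arithmetic.
# All numbers here are hypothetical, not from the study.

def expected_price(initial, annual_rate, years):
    """The 'should-have-been' price: initial price grown at the
    local-area appreciation rate over the period between sales."""
    return initial * (1 + annual_rate) ** years

def adjusted_loss(initial, annual_rate, years, actual):
    """Actual sale price minus the should-have-been price
    (negative means a loss relative to the local market)."""
    return actual - expected_price(initial, annual_rate, years)

# Illustrative figures only:
initial, rate, years, actual = 200_000, 0.03, 5, 150_000
should_have_been = expected_price(initial, rate, years)  # ~231,855
loss = adjusted_loss(initial, rate, years, actual)
pct = loss / should_have_been                            # ~ -35%
print(f"expected {should_have_been:,.0f}, adjusted loss {pct:.1%}")
```

Note that even when the nominal sale price merely stays flat, factoring in the area’s appreciation turns it into a loss – which is exactly why the adjusted losses exceed the raw ones.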
These numbers are pretty easy to understand, and for most actual property owners are a hard-to-refute indication of what awaits us should we be unfortunate enough to own property within 1 km of an IWT. It is powerful enough and inconvenient enough that MPAC felt the need to single it out for a hatchet job, which is contained in the 7 pages of Appendix G. The first couple of pages are introductory stuff. In the middle of page 2 they begin their critique with, by my count, 7 issues with Lansink’s methodology. The 7 are:
- Lansink uses the local area MLS price index in calculating the inflation rate. MPAC points out, correctly I guess, that within the MLS local area there could be neighborhood variances that could differ from MLS’s area average. MPAC has lots of neighborhoods defined (see Appendix F for a sampling) and it would be more accurate to use them. While more discrete data is generally a good thing, I think most people are quite willing to accept the local area MLS price index as a reasonable proxy. Besides – how would Lansink obtain MPAC’s neighborhood data? He used the best that he had, and that best is no doubt good enough for everyone besides MPAC. As you increase the number of neighborhoods you necessarily decrease the number of homes in each, increasing the chances of distortion by a single transaction. Issue #5 below will mention this as a problem from the opposite direction. No doubt if Lansink had used neighborhoods MPAC would be criticizing him for not using the more reliable area average. Additionally – how far apart could a neighborhood be from the local area average? Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- Lansink used just two points to “develop a trend”. I have no idea what they are talking about. Lansink is not developing any trends. As with neighborhoods, MPAC has more discrete timing adjustments than what Lansink used. In theory, more discrete data might be more accurate. In practice, maybe not, due to outliers. A monthly MLS area average is good enough for, again, everybody but MPAC. Additionally – how far apart could their timeline be from the local area average? Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- Two homes in Clear Creek have their initial and final sales 8 and 15 years apart, and something likely changed in the interim, affecting the price. People are always doing things to change the value of their homes – does MPAC have any indication that something substantial changed in one of these properties? If not, this is simply idle speculation, designed to instill confusion. Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- For the other 5 homes in Clear Creek Lansink used MPAC’s 2008 evaluations as the initial price, and MPAC is complaining about that. MPAC is apparently unaware of how ironic this sounds. They just finished, in this very study, bragging about how close their ASRs were to one. Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- For the properties in Melancthon Lansink used the buyout prices from CHD (the wind project developer) as the initial prices. To confirm these prices were at least in the ballpark of local market prices he obtained a local per-square-foot average price, and it compared favorably with the prices paid per square foot by CHD. Since there were only 4 samples in this part of his study, even one outlier becomes a possible source of distortion, and this is one of MPAC’s “major concerns”. This seems an odd criticism, coming from someone who relied upon the data in Table 9, with its fair share of single-digit samples. Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- MPAC found one house with a basement, and since footage in basements is treated differently from footage above ground, this would have changed the square-footage price used by Lansink in his comparison with the local average. Since there are only 4 houses in this sample, it would have moved the average up. MPAC spends the bottom of page 2, all of page 3 and part of page 4 discussing basements and whether they are finished or not. Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
- I’ll quote issue #7 in its entirety so you can fully appreciate it. “One final issue with the sales used in the Lansink study was that the second sale price was consistently lower than the first sale price despite the fact the time frame being analyzed was one of inflation. The absence of variability in the study make them suspect.” Suspect? THESE ARE PUBLIC RECORDS. There’s nothing suspect about them. These are facts. They won’t change. If they don’t fit your narrative perhaps your narrative needs to change, eh? Does MPAC provide any indication that this caused an error in Lansink’s conclusions? Of course not.
These 7 issues are an excellent example of spreading confusion, hoping that some of it will stick, saying whatever you can come up with to discredit an opponent. When you’re reduced to spending over a page discussing basements it provides an idea of just how desperate you are.
The second part of MPAC’s critique involves them running their own study of resales to see how it compares with Lansink’s. They find 2,051 re-sales that were part of this same study’s ASR calculations (in Study 1). They use their more discrete time variables in place of Lansink’s MLS local area averages. They use multiple regression analysis because “Paired sales methods and re-sale analysis methods are generally limited to fee appraisal and often too tedious for mass appraisal work.” Their conclusion: “Using 2,051 properties and generally accepted time adjustment techniques, MPAC cannot conclude any loss in price due to the proximity of an IWT.”
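For readers unfamiliar with “time adjustment” via regression: a generic version (MPAC does not disclose their actual model, so this is only a sketch under my own assumptions) regresses log sale price on a time index plus a 0/1 proximity dummy, and reads the dummy’s coefficient as the proximity effect. The data below are synthetic, invented purely to make the mechanics visible.

```python
# A generic sketch of time-adjusted regression with a proximity dummy.
# This is NOT MPAC's disclosed model; data are synthetic illustrations.
import numpy as np

# (month_index, near_turbine_flag, sale_price) -- invented figures
sales = [
    (0, 0, 200_000), (6, 0, 206_000), (12, 0, 212_000),
    (0, 1, 195_000), (6, 1, 180_000), (12, 1, 170_000),
]

# Design matrix: intercept, monthly time trend, within-1-km dummy
X = np.array([[1.0, m, near] for m, near, _ in sales])
y = np.log([price for _, _, price in sales])

# Ordinary least squares on log price
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
monthly_trend, proximity_effect = coef[1], coef[2]

print(f"monthly trend: {monthly_trend:+.3%}, "
      f"proximity effect: {np.expm1(proximity_effect):+.1%}")
```

The catch, of course, is that the output is only as trustworthy as the inputs and the unpublished filtering behind them – which is precisely the complaint below.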
In spite of the voluminous tables and examples, MPAC leaves some very basic questions unanswered. Like where were these 2,051 properties located and how were they selected? There’s no mention of them in the body of the 2012 study. Over what period were the resales captured? What were the prices of the close-in re-sales vs the far-away re-sales? Lansink has documented 7 losing resales within 1 km – why does your summary say zero?
MPAC has this habit of expecting us to be impressed with large amounts of data, without divulging where it came from and what filters might have been employed. Same with throwing all these numbers into a computer and expecting us to uncritically accept the output. In short, MPAC expects us to trust them to be fully honest, fully competent and fully independent. I hate to be the bearer of bad news to the fine folks at MPAC, but that trust is no longer automatic for increasing segments of Ontario’s population. Lansink’s numbers are out in the open and are processed in a way that anyone can verify. Your numbers suddenly appear and rely upon computers with undocumented processes that always support the agendas of your bosses. Your methods may be satisfactory to some media, some politicians, some courts and all trough-feeders, but please don’t be surprised that they are not satisfactory to those of us living in the trenches.