Tag Archives: Ontario

Vyn and Property Values

In January 2014 Dr. Richard Vyn, a professor at the University of Guelph in Ontario, published a study concluding that wind turbines have no significant effect on surrounding home prices.  The report was formally published in the Canadian Journal of Agricultural Economics in September 2014.  It didn’t attract much attention until fairly recently, when a spate of news articles appeared.  They pretty much all carried the same headline: a new peer-reviewed study shows wind turbines have no effect on home prices.

Vyn’s study applied a hedonic approach to the issue, much as previous academic studies have done.  I’ve studied many of them at some length and posted my critiques on this site.  It is important to note that none of these authors are real estate professionals.  Many of them are sponsored by or associated with institutions that have a stake in the wind industry.  In every case they use statistical techniques built on questionable assumptions and come up with results that please their sponsors and associates.

My quick look at Vyn’s report when it first came out told me it was more of the same, so I didn’t waste much time on it and never bothered with a posting.  But the recent media flurry got me to look at it again, and I can confirm it really is more of the same.

HIS STUDY

Vyn centered his study on the Melancthon project in Melancthon Township, Ontario.  He got sales data from MPAC, the Ontario assessor, for a total of 11 townships surrounding the project.  He looked at two basic effects of wind turbines on the neighbors – distance from the nearest turbine and visibility of the turbine(s).  He split the data into farm properties and residential properties and analyzed them separately.  He then ran a series of regressions, each producing a sloped line on a graph (e.g. sale price vs. square footage).  The slopes the paper was primarily interested in, sale price vs. distance to a turbine, were all close enough to zero that they “suggest that these wind turbines have not significantly impacted nearby property values”.
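For readers unfamiliar with the approach: a hedonic model, in stripped-down form, is just a regression of price on house characteristics plus distance to the nearest turbine, with the whole conclusion riding on the distance coefficient.  Here’s a minimal sketch in Python; the data and coefficients are invented, and this is not Vyn’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(800, 3000, n)     # invented house sizes
dist_km = rng.uniform(0.5, 50, n)    # invented distances to nearest turbine
# Build prices from size alone, so there is no distance effect to find:
log_price = 10.5 + 0.0004 * sqft + rng.normal(0, 0.2, n)

# Regress log(price) on a constant, size, and distance:
X = np.column_stack([np.ones(n), sqft, dist_km])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"distance coefficient: {coef[2]:+.5f} per km")
# A coefficient near zero is what produces the headline
# "no significant effect on property values".
```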

It isn’t until you read through the entire 26 pages that you discover that Vyn, to his credit, has what appear to be significant reservations about his study.  Examples:

  1. Page 11/375: These relatively low numbers [in close proximity to turbines] of post-turbine period observations, which may impede the ability to detect significant effects, represent a potential limitation of this study.
  2. Page 24/388: However, while the results indicate a general lack of significantly negative effects across the properties examined in this study, this does not preclude any negative effects from occurring on individual properties.  [Followed by a segue to Lansink]
  3. Page 25/389: While surveys have indicated that residents often perceive that the existence of wind turbines within their viewshed will reduce the value of their property, such perceptions have not often been corroborated by analyses of sales data, perhaps due, in part, to data limitations with respect to sales in close proximity to turbines.
  4. Page 25/389: The existence of limitations in the analysis undertaken in this paper should not be overlooked.

However, overlooking these self-acknowledged limitations is exactly what the media and proponents will invariably do.  One has to wonder how many reporters just read the abstract, not wanting to pay for the entire report.

LET’S START FROM SCRATCH

If you were going to try to figure out whether wind turbines actually lowered house prices, how would you go about it?  I think most of us would start by setting up a baseline of house prices at different distances from a project, at a point in time well before the project was on anybody’s radar.  Then we’d take the current prices and see how they compared.  MLS and MPAC both have this type of historical data, though I don’t know how granular it might be.  Regardless, you’d think this would be the starting point.  And this is exactly what Lansink, a real estate professional, did.  But for reasons we can all speculate upon, this is not what academics (i.e. Vyn) and other government-supported researchers (i.e. Hoen) do.

It would have been nice, for example, if Vyn had published the mean sales prices of homes in each of the 11 townships he studied, unmodified by all his statistical manipulations.  While Melancthon Township and the Melancthon project don’t overlap entirely, I’d think they overlap enough for an effect to appear, if there were one.  In Vyn’s particular case, he used GIS to look at sales within different distances of the project itself, so he could have published those averages as well.  But he didn’t, and I have to wonder why.

The most charitable answer is he thought other factors were present that would distort these averages, things like different ages and sizes of houses close to the project when compared with those further away.  Too bad he doesn’t discuss what those distortions might be.  Or perhaps since other academics have been using hedonic techniques he felt his study wouldn’t be looked at by them if he didn’t also.  Or perhaps he did look at those averages and didn’t like what he saw.

The paper is full of detailed and barely understandable (for me, at least) talk about things like spatial correlation, continuous specification, multicollinearity and so on.  It was obviously written for an academic audience and is pretty much an academic exercise.  But what is appropriate in an academic setting isn’t necessarily appropriate for the general public, and Vyn had to know that his study would be used as a bludgeon by wind energy proponents against that public.

It seems that academics and policy wonks tend to think in grand terms.  This type of study seems to imply that property value decreases aren’t worthy of policy consideration unless they are widespread.  And even though both Hoen and Vyn are careful to note the possibility of individual property losses, their main thrust is to dwell on the larger picture and that is certainly what the media and industry care about.  Individual tragedies be damned, no matter how many there are.

PROBLEMS

So right off the bat I’m leery of this type of study.  But there are other problems with how this study was put together and some of the rather basic assumptions that went into it.  I mentioned upstream that Vyn looked at distance and visibility.  Academic property studies that look at visibility are common, generally claiming that “how the turbines look” is an important part of the opposition to them.  Certainly for the larger public visibility is an issue – after all they can be seen for miles.  But visibility as an issue for house prices pales in comparison with noise.

Vyn mentions noise in passing: “While earlier literature also examined the issue of noise, the reduced emphasis on the noise disamenity appears to reflect improvements in turbine technology (Moran and Sherrington 2007).”  Is he kidding?  Using a reference from 2007, when turbines were a fraction of their current size?  Is he so insulated on his campus that he hasn’t seen, for example, CBC’s (hardly an opposition organization) Wind Rush?  And how did Moran know there were improvements?  He doesn’t say; it is simply an assertion, one that Vyn has carelessly adopted.

I’ll be charitable and say I think the reason so many academics focus on visibility goes back to their thinking in grand terms, where visibility has the potential of affecting large numbers of homes, while noise has a much smaller radius.  Also, it is difficult to write impressive-sounding studies when you’ve got only a handful of properties that are rendered uninhabitable (and quite often unsellable).  You can’t do multiple hedonic regressions with just a few points; you’re forced into (gasp!) doing comps and repeat sales, just like Lansink did.  It sounds much better to have thousands of data points, regardless of whether they convey the reality of what the neighbors are faced with.

And Vyn certainly had lots of data points – 5414 residences.  And, true to form, very few within 5 km of a wind turbine – 123 (I think).  As an aside, another benefit of using visibility is making the radius so large that any house price effects are greatly diluted, often into insignificance.  Which typically makes the sponsors/associates very happy.

But even that wasn’t good enough for Vyn.  He used Melancthon plus the surrounding 10 townships.  And Ontario townships are very large.  Some of his sales are 50 km from the project.  He mentions using them as a control group, but I see no sign that he ever did so in his analysis.  A picture:

[Image: map of Vyn’s study area]

The yellow line defines his area.  The yellow push pins are the approximate extents of the Melancthon project, while the red line is the approximate 5 km boundary around the project.  That area covers about 300 sq km, or roughly 7% of the total area he studied.  The sales within that area, 123, represent just 2.3% of the total.  Even considering that Melancthon is the least-densely populated township of the 11, that still is quite a low rate.  Vyn was right to comment about how this might skew his conclusions.

Picture one of Vyn’s ‘sale price vs. distance’ regression lines stretching from the center of the project to the edge of his study area.  To show an effect, that line would have to tilt away from horizontal.  Imagine how difficult it is to tilt an otherwise flat line when only 10% of it (maybe 5 km out of 50) is subject to the effect you are looking for.
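A toy simulation makes the dilution concrete.  I take Vyn’s own counts (123 sales within 5 km out of 5,414 total), assume a 25% close-in price discount (the kind of decline McCann and Lansink report), fill in everything else with made-up numbers, and fit one straight line over the whole 50 km:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_close = 5414, 123
dist = np.concatenate([rng.uniform(0, 5, n_close),      # 123 close-in sales
                       rng.uniform(5, 50, n - n_close)])
price = rng.normal(230_000, 40_000, n)   # invented baseline prices, no trend
price[dist < 5] *= 0.75                  # assumed 25% haircut inside 5 km

slope, intercept = np.polyfit(dist, price, 1)
print(f"fitted slope: {slope:+,.0f} $/km over a 50 km line")
# The slope comes out to only a couple hundred dollars per km: a barely
# visible tilt, easily lost in the noise of real sales data, even though
# the close-in homes in this toy world lost a quarter of their value.
```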

SUMMARY

Sadly the media, government, proponents and industry will all use this study to justify the continuing assault on rural Ontario.  Almost all will do so without actually reading the study, not to mention taking the time to think about what it really says and the basis upon which it was written.  When you have an agenda truth isn’t important; truthiness is.  And this report supplies truthiness in spades.  Whatever truth it supplies is buried deeply enough to not bother wind’s supporters.

LINKS

Vyn’s study; sorry, I couldn’t find a free copy of the entire report.

For samples of media reports:

And the industry:

My previous postings:

MPAC and Wolfe Island, again

INTRO

Several months ago Stewart Fast, a new professor at Queen’s University in Kingston, Ontario, undertook a study of why southern Ontario was such a hotbed of anti-wind-energy sentiment.  His conclusions were interesting, and I’ll have more to say about them in a future posting.  As part of his study he looked at property values, and in particular at MPAC (the Ontario real estate assessor), Wolfe Island and the property assessment reductions thereon.

As it happens, I had also looked at MPAC and Wolfe Island and posted on it about 18 months ago.   It seems that Fast and I used the same FOIA-obtained spreadsheet.  My main conclusion was that there seemed to be a large number of large reductions on Wolfe Island, but there wasn’t enough of a pattern to convincingly tie the reductions to the 86 wind turbines on Wolfe’s west end.

I’ve also posted on MPAC and property assessments in a 4-part series.  My main conclusion, contained in part 1, was that MPAC seemed to be hiding the reductions by lowering the values in neighborhoods that just coincidentally happened to be around wind turbines, while never formally incorporating distance to a wind turbine into their regressions.

What Dr. Fast’s work added to mine was that (1) he was able to group MPAC’s reductions on Wolfe Island by their distance to the nearest wind turbine, and (2) he reminded me how to use chi-square to test the differences between the bands for statistical significance.  The quick summary is that MPAC has been providing reductions to properties close to wind turbines significantly more often than to those further away.  And I’m not using the word “significantly” in some fuzzy qualitative manner – I mean “significantly” in the hard statistical, quantitative manner.  In other words, the odds of getting a wind-turbine-centered pattern just randomly are vanishingly small.  Wolfe Island provides a good, hard-to-refute example of how MPAC is finessing the numbers to deny the obvious.

THE DATA

The raw data (i.e. the spreadsheet) is quite detailed, so to save space here’s the summary of it.  There are 4 major areas in the municipality of Frontenac Islands:  Wolfe West (where the turbines are), Wolfe East, Howe Island and Simcoe Island.  The number of properties and total reductions are in the following table.

[Image: table of property counts and total reductions for the 4 areas of Frontenac Islands]

As both Fast and I have written, these numbers aren’t really indicative of anything having to do with wind turbine proximity. About the only thing that stands out is that Simcoe Island had a far higher rate than the other areas, which was at least partially due to reasons other than wind turbines.

The next step was what Fast added: he was able to use GIS software to group the reductions into buffers based on the distance to the nearest wind turbine.  He had 5 buffers: < 1 km, 1 – 2 km, 2 – 5 km, 5 – 10 km and > 10 km.  He used chi-square to see if there were significant differences between the buffers and found that the 1 – 2 km and 2 – 5 km buffers were significantly more likely to have reductions than the other buffers.  As he said, this may be suggestive but is not quite conclusive.

Dr. Fast graciously provided me the data that went into his buffers and I re-ran his chi-square calculations to make sure I could replicate his results.  Initially I thought the non-significance of the < 1 km buffer (you’d expect the buffer closest to the turbines to show the most significant effect) was due to the income-producing nature of any land close to a wind turbine, plus the setback that rendered about 25% of that buffer unoccupied.  While those could be important, I also noticed that by chance there were a lot of reductions just outside of the 1 km border.  As an example I show the following picture of the reductions around Wolfe’s main city, Marysville:

[Image: map of assessment reductions around Marysville]

The 1 km buffer ends about where Highway 95 T’s: to the left is inside that buffer while to the right is in the 1 – 2 km buffer.  Since all the buffer borders are fairly arbitrary anyway, I decided to proceed with 4 buffers, with my results below.

[Image: table of reductions by distance buffer, with chi-square p-values]

As the buffers get closer to the wind turbines you can see that the ratios of reductions to properties generally get higher.  The chi-square is a test to see if these ratios could simply be due to chance.  The “Chi-2 p” column provides the probability p that the buffer’s ratio would differ from the total Frontenac Islands ratio this much by random chance.  The two buffers closest to the wind turbines have less than a 1% chance of having ratios that high by chance, while the buffer farthest from the wind turbines has a much less than 1% chance of having a ratio that low by chance.  Note that BOTH the close-in and far-away reductions are significantly different from the mean, and in directions that are BOTH consistent with the hypothesis that wind turbines are associated with property assessment reductions.
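For the statistically inclined, here is the shape of that test in code – a minimal sketch, with placeholder counts standing in for the real Frontenac Islands numbers in the table above:

```python
from scipy.stats import chi2_contingency

# (reductions, properties) per buffer; these counts are invented
buffers = {"< 2 km": (60, 150), "2 - 5 km": (90, 300),
           "5 - 10 km": (80, 400), "> 10 km": (70, 550)}

total_red = sum(r for r, _ in buffers.values())
total_prop = sum(p for _, p in buffers.values())

for name, (red, prop) in buffers.items():
    # 2x2 table: this buffer vs. the rest of the islands,
    # reductions vs. non-reductions
    rest_red, rest_prop = total_red - red, total_prop - prop
    table = [[red, prop - red], [rest_red, rest_prop - rest_red]]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{name}: reduction rate {red/prop:.0%}, chi-square p = {p:.4f}")
```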

DISCUSSION

In my earlier posting on the MPAC 2012 study I predicted that they would lower assessments close to wind turbines while never explicitly recognizing wind turbines as the cause.  I offered up Wolfe Island as an example of how this might proceed.  Thanks to Dr. Fast and a fair amount of serendipity we now have a solid indication that MPAC is proceeding as predicted.  While some aberrations in assessments and reductions would be expected (some chi-squares show significance where none really exists), the pattern shown above is just too consistent to be cast aside as coincidental or anecdotal.  We proposed a hypothesis that there would be more reductions closer to the wind turbines and the data clearly support that hypothesis.

Dr. Fast does have a point that this is indirect evidence of lower values.  After all, we don’t have the municipality’s “bufferized” assessed values, not to mention bufferized sales data.  In the 2012 study, MPAC did provide bufferized assessments for the entire province and they show a 25% decrease within 5 km of the turbines, a result that somehow got lost in their summary.  As for actual sales, there have been so few, especially on Wolfe’s west end, that any sort of statistically-valid testing would be difficult.  In the meantime, reductions will have to serve.  That MPAC seems to be going out of its way to hide this trend indicates that MPAC is being used to implement a political agenda – a problem greater than just wind turbine assessments.

Overall, some 22% of the properties in the Frontenac Islands were granted reductions by MPAC.  I’d love to know if that is typical – it would be interesting to study the assessments and reductions in the municipalities with and without wind turbine projects.  Unfortunately, as even Dr. Fast commented, MPAC has made getting their data just about impossible.

In his report Fast says that the evidence of reductions due to wind turbine proximity is “suggestive but not conclusive”.  Given the numbers above you are of course free to come to your own conclusion, but to me they are more than “suggestive”.  Perhaps there are confounding effects from the 2008 meltdown and the economic malaise that followed, but I’d think those would affect the area as a whole.  If it isn’t the wind turbines that produce this rather clear pattern, then what is?

MPAC’s 2012 Study

Last week the Ontario Municipal Property Assessment Corporation (MPAC) released the 2012 version of their continuing study (following one in 2008) of wind turbines and property values in Ontario, entitled Impact of Industrial Wind Turbines on Residential Property Assessment In Ontario.  To sum it up, they still find no evidence that wind turbines cause property value declines.

The study consists of a 31-page main section [backup link] along with 12 appendices.   MPAC seems to have their own language and it isn’t easily penetrated by a layman. I’ve read over it carefully several times and there are still aspects of it that escape me.  The appendices are generally beyond anyone who is not a professional.  On page 4 they state their goals for this version of the study:

Specifically, the study examined the following two statements:

1. Determine if residential properties in close proximity to IWTs are assessed equitably in relation to residential properties located at a greater distance. In this report, this is referred to as Study 1 – Equity of Residential Assessments in Proximity to Industrial Wind Turbines.

2. Determine if sale prices of residential properties are affected by the presence of an IWT in close proximity. In this report, this is referred to as Study 2 – Effect of Industrial Wind Turbines on Residential Sale Prices.

Their two main conclusions, on page 5, are:

Following MPAC’s review, it was concluded that 2012 CVAs of properties located within proximity of an IWT are assessed at their current value and are equitably assessed in relation to homes at greater distances. No adjustments are required for 2012 CVAs. This finding is consistent with MPAC’s 2008 CVA report.

MPAC’s findings also concluded that there is no statistically significant impact on sale prices of residential properties in these market areas resulting from proximity to an IWT, when analysing sale prices.

Actually, there are three parts to this study, with the third contained in Appendix G [backup link].  Early in 2013 one Ben Lansink published a pretty solid study that showed property value declines of anywhere from 22% to 59% and averaging about 37% on residential properties close (all within 1 km) to IWTs, which I posted on at the time.  Apparently Lansink’s work was solid enough that MPAC felt obliged to attack it.

For me to critique all three parts would make for a very long posting, so I’m going to divide it up.  Obviously the details will follow in my subsequent postings, but for the impatient let me summarize below.

Part 1, are MPAC’s evaluations close to IWTs as accurate (equitable, in their words) as those further away?  This section is only of tangential interest to me, as the central question isn’t MPAC’s accuracy, but rather the effect of IWTs on prices.  It seems that, given MPAC’s explanations, their appraisals are accurate.  Still, there are some items in this part that are of interest.  For example, it seems that MPAC has been playing games to get the appraisals to agree with the market while hiding the effect of wind turbines.  They also studied only turbines of 1.5 MW and larger, ignoring older turbines and the areas of Ontario where the impact has already been felt.

Part 2, do IWTs have an effect on properties closer to them?  This section is of central interest.  Unfortunately there are only 5 pages in Part 2, leaving lots of details missing.  Things like the sales prices within the close-in areas. MPAC’s major tool for doing mass appraisals (4.7 million in Ontario) is multiple regression analysis and we’ve had lots of experience with how that can be manipulated to obtain the answer your sponsor wants.  Instead of providing us the prices and letting us judge for ourselves what any effects might be, they opaquely run those prices through their regressions and voila! claim there’s nothing to see here!

But whoever wrote Part 2 must not have been talking to whoever wrote Part 1.  On page 18, well within Part 1, there’s Figure 2.  Its purpose there is to show how close the appraisals are to the sales data (the paired blue and green bars) for the different distances from the IWTs.

[Image: MPAC’s Figure 2 – sale prices and appraisals by distance from IWTs]

Note the blindingly obvious.  Prices (and appraisals) within 5 km of IWTs are substantially lower than those further away.  I’ve added the horizontal lines so we can better determine the values, which are noted to the side.  Michael McCann, among others, has done a number of studies on IWTs and prices, and his overall conclusion is a decline of 25-40%, with almost 100% in some cases.  Does anyone want to calculate the decline from 228,000 to 171,000?  (That’s 57,000 off of 228,000 – exactly 25%.)  Perhaps the disparity is due to something as simple as the spread between rural and urban properties, but don’t you think MPAC would at least mention something?  Nope.  Nada.

Part 3, what are the problems with Lansink’s study?  Appendix G is more or less readable and provides an excellent example of what David Michaels’s book, Doubt is Their Product, talks about.  MPAC throws up, by my count, 7 objections to Lansink’s methodology, of which exactly zero actually indicate that Lansink’s numbers are wrong.  Sowing confusion seems to be the most logical explanation.  As an example, objection #4 of the 7 is that for some of the pre-IWT prices Lansink used, gasp!, MPAC’s own appraisals.  Perhaps whoever wrote Appendix G didn’t bother reading the conclusions in Part 1.

There are more details, of course, in the following postings.

Critique of Part 1

Critique of Part 2

Critique of the Lansink hatchet job

MPAC 2012, Study 1

If you haven’t already please read the summary posting as an introduction.  This is the second of four postings on the MPAC study and covers MPAC’s Study 1.  My third posting, covering Study 2, is here.  And my fourth posting, covering the Lansink critique, is here.

Part 1 of MPAC’s 2012 study asks whether MPAC has assessed properties close to IWTs as equitably as properties further away.  This part, although of only tangential interest to wind opponents like myself, occupies the central part of the entire study.  We think the larger question is whether IWTs reduce property values, not whether MPAC is clever and honest enough to correctly recognize those reductions.

MPAC is in the business of mass assessments, nearly 5 million in Ontario.  Given this volume they have no choice but to use computers and computer-friendly techniques to do their assessments.  That translates to a significant reliance on multiple regression analysis.  They determine what sorts of characteristics influence the selling prices and then use the computers to find out how much influence each characteristic has.  In their experience, 85% of the selling price can be calculated using 5 characteristics, or variables: location, building area, construction quality, lot size and age of the home adjusted for renovations and additions.  Note that distance to a wind turbine is not one of their characteristics and MPAC seems determined to keep it so.  But also note that location could be used in lieu of distance – more on this later.

MPAC uses the ASR, or Assessment-to-Sales Ratio, to determine if their assessments are accurate.  It is simply the assessment divided by the selling price, with a ratio of 1.0 being a perfect match.  MPAC expects ratios between 0.95 and 1.05, and presents what seems to be an endless series of charts demonstrating this, primarily in the appendices.  While obviously MPAC (actually everyone) has an interest in accuracy, their emphasis on it seems misplaced in a study entitled Impact of Industrial Wind Turbines on Residential Property Assessment In Ontario, which to me and most residents poses quite a different question.
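In code the ASR is a one-liner; the prices below are invented, but the 0.95 – 1.05 band is MPAC’s own:

```python
def asr(assessment: float, sale_price: float) -> float:
    """Assessment-to-Sales Ratio: 1.0 is a perfect match."""
    return assessment / sale_price

ratio = asr(199_500, 210_000)   # invented assessment and sale price
print(f"ASR = {ratio:.3f}, within MPAC's band: {0.95 <= ratio <= 1.05}")
```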

Just think of the ramifications if MPAC decided to include distance from an IWT in their regressions.  I have little doubt it would make Ontario’s lawyers very happy.  It would also put Ontario’s very-pro-IWT ruling party in a difficult political spot.  And don’t forget that the board of MPAC is appointed by the Minister of Finance, who is a member of the ruling party’s cabinet.

Upstream I mentioned that MPAC could use the location variables that already exist in their regressions to finesse their way out of this problem.  I point to Wolfe Island as an example of how this might work.  The western half of WI is now home to 86 IWTs, a project that had been in development since roughly 2000.  If this half constitutes a “neighborhood” then MPAC could reduce the values in that neighborhood in a uniform manner and never have to recognize the elephant in the room.  As it happens, I posted on MPAC’s actions on Wolfe Island about 18 months ago.  In the 7 years when the wind project went from being developed to operational, the roughly 700 properties on Wolfe received the following number and average reductions:

  • 2005/06: 130 reductions, averaging 9.3%
  • 2006/07: 33 reductions, averaging 15.2%
  • 2007/08: 12 reductions, averaging 28.8%
  • 2008/09: 34 reductions, averaging 12.4%
  • 2009/10: 44 reductions, averaging 29.0%
  • 2010/11: 22 reductions, averaging 30.0%
  • 2011/12: 27 reductions, averaging 24.0%

That’s a total of 302 reductions, which seems like a rather large percentage of the properties there.

UPDATE – I revisit the Wolfe Island story here.  My suspicions are confirmed.

A Wolfe Island couple, the Kenneys, asked for a reduction, which they say MPAC was willing to grant, although MPAC wouldn’t let IWTs be used as the reason.  It ended up in court, and a local paper had a reasonably good account of it.  Perhaps MPAC’s reluctance to admit the obvious is that once they admit it, they must then include distance in their regressions, and doing that (with its legal and political repercussions) is just too unpleasant.  So they limp along, using location instead.

Their favored overall chain of logic seems to be: since the ratios in neighborhoods close to IWTs aren’t much different from those further away, and since those ratios indicate their assessments are accurate, and since MPAC doesn’t include distance to an IWT in their regressions, ergo distance from an IWT isn’t a factor in reducing values.  Part 1 of this study is a necessary link in that chain.  So the real purpose of this part of the study (and the study as a whole) seems to be to publicize MPAC’s skill at keeping the assessments in line with reality, while deflecting attention from how MPAC goes about doing it.  MPAC is, after all, in a tight spot.  The reality is that home prices take a dive when close to IWTs.  MPAC somehow has to lower the assessments around IWTs to keep the ASRs in line while keeping their bosses happy.

Unfortunately, the wind industry will be using this study for quite a different purpose – to bolster their argument that IWTs don’t impact home prices in the first place.

MPAC 2012, Study 2

If you haven’t already please read the summary posting as an introduction.  This is the third of four postings on the MPAC study and covers MPAC’s Study 2.  My second posting, covering Study 1, is here.  And my fourth posting, covering the Lansink critique, is here.

Details of Study #2

I fear that this part will be a difficult one for most people to follow, not to mention being lengthy.  Feel free to skip it.  But I think it is important to document what this Study contains, and MPAC made no effort to make understanding it easier.  I recommend you print out Study 2’s  5 pages (pdf pages 26 to 30) and have them at hand as you read this.

The purpose of Study 2 is to “study the effect of proximity to industrial wind turbines on residential sale prices.” In summary, Study 2 finds that “With the exceptions noted above, no distance variables entered any regression equations for any of the other market areas.”  Say what?

It seems that people who are in the business of estimating real estate prices tend to fall into one of two camps.  First are those who make their living providing services to the people who actually own the properties, with real estate brokers being the most obvious examples.  These people tend to focus on one property at a time and generally use comps or repeat sales to obtain their estimates.  Second are those who make their living providing services to people who don’t actually own the property.  Academics and mass appraisers (like MPAC) are the most obvious examples.  These people tend to focus on many properties at a time and generally use statistical techniques like multiple regression analysis to obtain their estimates.  The second class tends to think in terms of rejecting the null hypothesis – you assume there is no difference between two sets (in this case close-in prices and far-away prices) unless you have “statistical significance”.  As a snarky aside, getting to statistical significance in real estate can be quite a challenge, given the wide variance among prices, and can be even more difficult when your sponsor/boss doesn’t want you to do so.

So of course MPAC used their main tool, multiple regression analysis.  They created three new variables based on distance from an IWT and entered these into their regression equations to see if the new variables were statistically significant.  If they aren’t statistically significant they don’t “enter” into the regression equations.  As for the exceptions (which we’ll get to shortly): out of 30 possibly significant variables, only 4 were significant, and 3 of them were positive!  Whew!

So right off the bat MPAC is using a tool that doesn’t provide the answers the actual owners of potentially affected properties really care about.  A binary statistical significance indicator does not answer the “how much” and “how likely” questions a homeowner is going to have.  In this case, MPAC has skipped through the study so opaquely that I can’t even have much confidence in my critique.  There are just too many omissions, too many unexplained leaps, too many dangling statements.

There are just 5 pages in Study 2.  The first of these (page 25 of the study) lists the three new distance variables and sets the criteria for statistical significance at either 5% or 10%.  For those unfamiliar with the concept: a 5% significance level means a price difference only counts if it is large enough that it would show up by chance no more than 5% of the time, were the close-in and far-away homes really part of the same larger population.  Any evidence that falls short of that bar, however suggestive, gets discarded.  What if the evidence for a drop in your home value reaches “only” the 80% level?  Not significant, from MPAC’s perspective.

The second page (page 26) is dominated by Table 9.  For MPAC’s purposes Ontario is divided into 130 “market areas”.  These areas presumably share some common basis that allows them to be treated as a unit in the regression equations.  Unfortunately I couldn’t find where the areas are or how many homes are in each.  Of the 130, MPAC found 15 that had large enough turbines in them to be of interest.  These 15 are listed in Table 9, along with the numbers of sales within each of the 3 distance variables for both pre-construction and post-construction periods.  MPAC didn’t bother adding them up either horizontally or in total, but I did.  The numbers inside the grid add up to 3136, which should be the total sales within 5 km across all the areas.  But if you add up their numbers along the bottom you come up with 3143.  It turns out that their 142 should be 139 and their 1584 should be 1580.  Now this isn’t much of an error, except that any pre-teen with a spreadsheet and 10 minutes wouldn’t have made it.

At the bottom of page 26 they introduce pre-construction and post-construction periods, and note that only two of the 15 areas have enough sales to test both distances and periods.  Most of the remaining 13 have “sufficient sales within 1 KM to test the value impact within that distance”.  They also state that the “sales period to develop valuation ranges from December 2008 to December 2011”, and that Table 10 provides a summary.

The third page (page 27) is dominated by Table 10.  It lists the remaining 10 market areas that presumably have “sufficient sales within 1 KM to test the value impact within that distance”.  2 of these have enough sales to test both distances and periods while the other 8 have enough sales to test just the distances.  For each of the 10 areas MPAC lists square footage, etc., and median adjusted prices.  Are these the prices for the entire area or just within 1 km?  MPAC doesn’t say.  What is the criterion for “sufficient”?  MPAC doesn’t say.  Nor does MPAC include what should obviously be included – tables for both.  I suspect they are for the entire area, in which case they are useless for our purposes, at least without the close-in comparison.

Presuming the criterion for inclusion in Table 10 is the 1 km test mentioned on page 26, one has to wonder how 26RR010 and 31RR010 got into it, as Table 9 shows they had zero sales within 1 km.  Snark alert – maybe the missing 7 sales from Table 9 took place in these areas?  And if 1 km isn’t the criterion, what is?  MPAC never says.

At the bottom of page 27 they mention that some sales at the 5 km distance were in urban as opposed to rural market areas and thus were eliminated.  They don’t say how many, nor what their effects on the regressions might be.  They also reiterate their statistical significance levels.

On the fourth page (page 28) they present two more tables, 11 and 12.  Table 11 lists the 8 market areas that had sufficient sales (within 1 km?) to test the distance variables while Table 12 lists the 2 market areas that had sufficient sales to test both distance and periods.  These tables made absolutely no sense to me until I noticed Appendix F.

For all 10 areas they entered the 3 distances and ran their regressions.  In Appendix F they list all the “excluded” variables, in this case all the distance-related variables that didn’t reach statistical significance.  They apparently are called “excluded” since, being “insignificant”, they don’t enter into MPAC’s final pricing calculations.  If you look at the “sig” column you will not see any value less than .100, the 10% significance level MPAC mentioned on pages 25 and 27.  I assume by omission (and that’s all I can do here) that any of the 3 distance variables NOT listed in Appendix F are in fact significant.

On my first pass through Appendix F I came up with 6 omitted, and thus assumed significant, variables.  Two of the omissions were for zero sales, in areas that shouldn’t even be there by the < 1 km criterion.  But maybe the < 1 km variable was never even entered on the exclusion listing in Appendix F, so maybe I had erroneously assumed it was not excluded when in fact it didn’t exist in the first place.  So maybe the criterion for inclusion in Table 10 wasn’t sufficient sales within 1 km, but rather sufficient sales within 5 km.  Just a typo, right?  At least Table 11 is now consistent with Tables 9 and 10.

Finally!  Out of the 30 tests (10 areas times 3 tests) I count 4 that are significant.  Those 4 make up the “non-DNE” entries in Table 11.  MPAC provided absolutely no guidance or explanation about any of this, apparently writing for a very small audience.

Table 12 shows the 2 areas that had enough sales to test both distance and periods.  You’d think that they’d be creating 6 variables for each of them instead of the 3 variables the other 8 areas received.  Looking at Appendix F all you see is the same 3 as everyone else got.  And all of those variables were excluded.  But Table 12 shows 2 of the variables being significant for 26RR010.  Perhaps Appendix F was based on a 5% significance level and Table 12 was based on 10%.  Who knows?

I can only guess that the dollar amounts in Tables 11 and 12 are the effects of being in those areas upon the prices.  So, in the Kingston area (05RR030), if you live within 1 km of an IWT, you can expect the value of your home to increase by $36,435! Very impressive – 5 digit accuracy, especially with a sample size of 7.

Finally, thank goodness, we come to the fifth page (page 29).  It is the Summary of Findings and contains more words than the rest of the Study put together.  This section mostly lists the significant variables and adds some fairly cryptic commentary.

Some Commentary

As I read through and dissected this Study I couldn’t escape the sense that MPAC didn’t want to put much effort into it.  Any narrative or explanations or even public-friendly conclusions are absent.  The tables that are included are ok, once you take the time to figure them out, but what about all the stuff they should have included but didn’t?  Things like the median prices in the areas represented by the 30 variables.  Or an Appendix F1 that shows the included variables, allowing us to see the t-scores etc for ourselves.  Etc., etc.

These missing items cause this Study to be terribly opaque.  I hope my explanation above is accurate, but I can’t be sure due to all the missing items.  Maybe the Study reaches valid conclusions, but I sure can’t verify that.  Perhaps MPAC thinks we should just trust them to be an honest pursuer of the truth.  Sorry, that no longer flies, if it ever did.  You have to wonder, is there some reason other than laziness or stinginess that this Study seems so empty?  In addition to the opacity the Study includes several cryptic items that MPAC never explains.  For example, from the summary, what do these sentences actually mean?

“Upon review of the sales database, it was determined that the IWT variables created for this study were highly correlated with the neighbourhood locational identifier. This strong correlation resulted in coefficients that did not make appraisal sense, and thus have been negated for the purposes of this study.”

If you look at the excluded variables in Appendix F you notice that most of them are named “NBxxxx”.  Probably those are neighborhood identifiers that somehow overlay the market areas.  MPAC never mentions how many there are or what the criteria are for forming one.  But pretty obviously the areas around an IWT could easily coincide with their neighborhoods.  So what gets negated?  Some of the coefficients?  All of them?  MPAC provides no further information.

As an aside, I found it interesting to scan over the other excluded variables to see what sorts of things MPAC puts into their regressions.  Many of them make no sense and they seem to vary greatly from market to market.  I can’t help but think of a bunch of regression-heads sitting at their desks hurriedly making up variables and desperately running regressions in an effort to get the ASRs closer to one (ASRs are covered in Study 1).

I’ll leave (thankfully, believe me) this Study behind with the final thought that it seems so slapped together, so opaque, so disjointed that perhaps even MPAC themselves weren’t sure what significance it holds.  Unfortunately, the wind industry won’t care about any of that, and will use this study to continue harming Ontario residents.

MPAC 2012 and Lansink

If you haven’t already please read the summary posting as an introduction.  This is the fourth of four postings on the MPAC study and covers MPAC’s Lansink critique.  My second posting, covering Part 1 of the study, is here.  And my third posting, covering Part 2, is here.

Ben Lansink is a professional real estate appraiser based in Ontario.  In February 2013 he published a study of two areas (Melancthon and Clear Creek, Ontario) where 12 homes all within 1 km of an IWT were sold on the open market.  He used previous sales and MPAC assessments to establish what the prices were before the IWTs arrived and then compared that with the open market prices after they went into operation.  The declines were enormous, averaging above 30%.  The following (thankfully clickable) spreadsheet snapshot gives a good summary of his results.

[Image: snapshot of Lansink’s results spreadsheet]

In quite a departure from MPAC’s style, Lansink lists every sale, every price, every time-related area price increase rate and every source.  Lansink establishes an initial price at some time before the IWTs were installed, applies a local-area inflation rate over the period between the sales, and compares the “should-have-been” price with what the actual sale price was after the IWTs were installed.  In all 12 cases the final price was lower than the initial price, leading to an actual loss on the property.  When the surrounding real estate price increases are factored in, the resulting adjusted losses are even greater.  The compulsive reader might notice that the numbers above vary slightly from Lansink’s.  In order to check his numbers I reran all his calculations in the above chart and there are some rounding errors – on the order of < $10.  I posted on Lansink’s study when it came out, along with a second posting on a previous version of his study.
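For those who want the arithmetic spelled out, Lansink’s calculation has this shape (my sketch, with invented numbers, not his actual figures):

```python
def adjusted_loss(initial_price: float, annual_rate: float,
                  years: float, final_price: float) -> float:
    """Loss relative to the inflation-adjusted 'should-have-been' price."""
    expected = initial_price * (1 + annual_rate) ** years
    return (expected - final_price) / expected

# e.g. a $200,000 home, 3%/yr local appreciation, resold 5 years later
# for $160,000: roughly a 31% adjusted loss
print(f"{adjusted_loss(200_000, 0.03, 5, 160_000):.0%}")
```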

These numbers are pretty easy to understand, and for most actual property owners are a hard-to-refute indication of what awaits us should we be unfortunate enough to own property within 1 km of an IWT.  It is powerful enough and inconvenient enough that MPAC felt the need to single it out for a hatchet job, which is contained in the 7 pages of Appendix G.  The first couple of pages are introductory stuff.  In the middle of page 2 they begin their critique with, by my count, 7 issues with Lansink’s methodology.  The 7 are:

  1. Lansink uses the local area MLS price index in calculating the inflation rate.  MPAC points out, correctly I guess, that within the MLS local area there could be neighborhood variances that differ from MLS’s area average.  MPAC has lots of neighborhoods defined (see Appendix F for a sampling) and it would be more accurate to use them.  While more granular data is generally a good thing, I think most people are quite willing to accept the local area MLS price index as a reasonable proxy.  Besides – how would Lansink obtain MPAC’s neighborhood data?  He used the best that he had, and that best is no doubt good enough for everyone besides MPAC.  As you increase the number of neighborhoods you necessarily decrease the number of homes in each, increasing the chances of distortion by a single transaction.  Issue #5 below will mention this as a problem from the opposite direction.  No doubt if Lansink had used neighborhoods MPAC would be criticizing him for not using the more reliable area average.  Additionally – how far apart could a neighborhood be from the local area average?  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  2. Lansink used just two points to “develop a trend”.  I have no idea what they are talking about.  Lansink is not developing any trends.  As with neighborhoods, MPAC has more granular timing adjustments than what Lansink used.  In theory, more granular data might be more accurate.  In practice, maybe not, due to outliers.  A monthly MLS area average is good enough for, again, everybody but MPAC.  Additionally – how far apart could their timeline be from the local area average?  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  3. Two homes in Clear Creek have their initial and final sales 8 and 15 years apart, and something likely changed in the interim, affecting the price.  People are always doing things to change the value of their homes – does MPAC have any indication that something substantial changed in either of these properties?  If not, this is simply idle speculation, designed to instill confusion.  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  4. For the other 5 homes in Clear Creek Lansink used MPAC’s 2008 evaluations as the initial price, and MPAC is complaining about that.  MPAC is apparently unaware of how ironic this sounds.  They just finished, in this very study, bragging about how close their ASRs were to one.  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  5. For the properties in Melancthon Lansink used the buyout prices from CHD (the wind project developer) as the initial prices.  To confirm these prices were at least in the ballpark of local market prices he obtained a local per-square-foot average price, and it compared favorably with the prices paid per square foot by CHD.  Since there were only 4 samples in this part of his study, even one outlier becomes a possible source of distortion, and this is one of MPAC’s “major concerns”.  This seems an odd criticism, coming from someone who relied upon the data in Table 9, with its fair share of single-digit samples.  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  6. MPAC found one house with a basement and since footage in basements is treated differently from footage above ground, this would have changed the square footage price used by Lansink in his comparison with the local average.  Since there are only 4 houses in this sample, it would have moved the average up. MPAC spends the bottom of page 2, all of page 3 and part of page 4 discussing basements and whether they are finished or not.  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.
  7. I’ll quote issue #7 in its entirety so you can fully appreciate it.  “One final issue with the sales used in the Lansink study was that the second sale price was consistently lower than the first sale price despite the fact the time frame being analyzed was one of inflation. The absence of variability in the study make them suspect.”  Suspect?  THESE ARE PUBLIC RECORDS.  There’s nothing suspect about them.  These are facts.  They won’t change.  If they don’t fit your narrative perhaps your narrative needs to change, eh?  Does MPAC provide any indication that this caused an error in Lansink’s conclusions?  Of course not.

These 7 issues are an excellent example of spreading confusion in the hope that some of it will stick, saying whatever you can come up with to discredit an opponent.  When you’re reduced to spending over a page discussing basements, it gives a good idea of just how desperate you are.

The second part of MPAC’s critique involves them running their own study of resales to see how it compares with Lansink’s.  They find 2,051 re-sales that were part of this same study’s ASR calculations (in Study 1).  They use their more granular time variables in place of Lansink’s MLS local area averages.  They use multiple regression analysis because “Paired sales methods and re-sale analysis methods are generally limited to fee appraisal and often too tedious for mass appraisal work.”  Their conclusion: “Using 2,051 properties and generally accepted time adjustment techniques, MPAC cannot conclude any loss in price due to the proximity of an IWT.”

In spite of the voluminous tables and examples, MPAC leaves some very basic questions unanswered.  Like where were these 2,051 properties located and how were they selected?  There’s no mention of them in the body of the 2012 study.  Over what period were the resales captured?  What were the prices of the close-in re-sales vs the far-away re-sales? Lansink has documented 7 losing resales within 1 km – why does your summary say zero?

MPAC has this habit of expecting us to be impressed with large amounts of data, without divulging where it came from and what filters might have been employed.  Same with throwing all these numbers into a computer and expecting us to uncritically accept the output.  In short, MPAC expects us to trust them to be fully honest, fully competent and fully independent.  I hate to be the bearer of bad news to the fine folks at MPAC, but that trust is no longer automatic for increasing segments of Ontario’s population.  Lansink’s numbers are out in the open and are processed in a way that anyone can verify.  Your numbers suddenly appear and rely upon computers with undocumented processes that always support the agendas of your bosses.  Your methods may be satisfactory to some media, some politicians, some courts and all trough-feeders, but please don’t be surprised that they are not satisfactory to those of us living in the trenches.

Infrasound from Wolfe

Several months ago I splurged on an infrasound detector, with the intention of establishing a baseline in several homes on Amherst Island before the wind turbine project becomes operational and then comparing it with the levels afterwards.  All of the homes I’m interested in are built close to the shoreline and I figured that waves would be the main contributor.  While I don’t have the ability to record wave action, I do record wind speed and direction from the nearby Kingston airport.  So far it seems that wind itself is the main contributor (at least on Amherst Island, in a rural environment; in my town the levels vary more with human activity), with direction (and the subsequent wave action) not seeming to have much of an effect.  I use the word “seem” as I’m still looking over the data, and will be updating my postings as I gain any new insights.

The home where the detector is now installed is on Amherst’s South Shore, facing Lake Ontario.  Visible from the home, across a sliver of Lake Ontario, is the Wolfe Island Wind Project (WIWP), with the nearest of its 86 turbines 13.8 km away.  I wondered if part of the infrasound I was measuring was due to the WIWP, but there was no way for me to separate out the effects of the wind and the project.

And then serendipity struck!  On December 20th an ice storm swept through the area and the WIWP went offline until December 27th, while the wind kept blowing.  And I was recording the entire thing!  There have been several articles written about infrasound from wind turbines, but I am unaware of any studies that involved a shutdown of anything more than a few minutes, let alone over a week.

Now the numbers.  For the entire month (698 samples – the weather station at the airport sometimes shuts down overnight), the average noise level (in mPa) was 297 with an average wind speed of 10.16 mph.  For the non-iced period (562 samples while the WIWP was running normally) the averages were 306 and 10.18.  For the iced-over period (133 samples; I left 3 transitional hours out) the averages were 262 and 10.14.  The increment comes to 44 mPa, an increase of 17% over the iced-over level.  The dB equivalents are 82.3 and 83.7, an increase of 1.4 dB.  Now 1.4 dB doesn’t sound like a lot, but recall that this is an average over an entire week.  Kinda like an average temperature deviation of 2°F: it doesn’t sound like a lot, but it is enough to get us remarking on it being colder or warmer than usual.
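For anyone who wants to check the conversion, the mPa-to-dB arithmetic is straightforward, and it reproduces the figures above:

```python
import math

def spl_db(p_mpa: float, ref_upa: float = 20.0) -> float:
    """Sound pressure level in dB for a pressure given in mPa,
    relative to the standard 20 uPa reference."""
    return 20 * math.log10(p_mpa * 1000 / ref_upa)

print(f"{spl_db(262):.1f} dB")   # iced-over average   -> 82.3
print(f"{spl_db(306):.1f} dB")   # normal operation    -> 83.7
# The gap between the two works out to roughly 1.4 dB.
```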

To show the difference in a more meaningful manner, I charted the noise levels vs. wind speeds (above 4 mph – there’s very little noise below that speed) for the two periods.  Here are the charts.  First, with the WIWP shut down.  The slope of the line indicates that each mph increase in wind speed produces an average of 41.8 additional mPa of infrasound.

[Image: noise vs. wind speed, WIWP iced over and offline]

Second, with the WIWP operating.  The slope is now 54.5 mPa per mph.

[Image: noise vs. wind speed, WIWP operating]
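Nothing exotic produced those slopes; each is an ordinary least-squares fit of hourly noise level against wind speed.  A minimal sketch, with placeholder arrays standing in for my recorded data:

```python
import numpy as np

wind_mph = np.array([5, 7, 9, 11, 13, 15], dtype=float)      # placeholder
noise_mpa = np.array([180, 260, 350, 430, 520, 600], dtype=float)

keep = wind_mph > 4                     # ignore near-calm hours, as above
slope, intercept = np.polyfit(wind_mph[keep], noise_mpa[keep], 1)
print(f"{slope:.1f} mPa per mph")       # compare: 41.8 iced vs. 54.5 normal
```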

These results fly in the face of several studies that claim that wind projects produce very little infrasound.  Wind proponents will, of course, find all sorts of reasons why these measurements are invalid: my science was wrong, my equipment was wrong, my biases were wrong, the winds were wrong, something, anything.  What they won’t do, of course, is cooperate enough to allow an independent researcher to run a similar experiment.

Some details.  The detector was an Infra-20, with a frequency range of 0.1-20 Hz.  It was located in the same place for the entire month of December (in an upstairs loft overlooking a large glass-fronted great room, which may explain the relatively high averages).  I used a home-written Perl program on a Dell running XP to record the 50 samples/sec that the Infra-20 produces (their Amaseis wasn’t reliable enough; my code has been surprisingly stable, missing much less than 1% of the samples, for all reasons combined, during the entire month).  Every hour I calculate the average noise level and save it for input to a spreadsheet.  If you’d like the spreadsheet or the program let me know.
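For the curious, the logging logic amounts to the following.  My actual program was written in Perl; this Python sketch is a stand-in, read_sample() is hypothetical, and the exact averaging statistic is my assumption:

```python
import time

def read_sample() -> float:
    """Hypothetical stand-in for pulling one Infra-20 reading (in mPa)
    off the serial feed; the device delivers about 50 samples/sec."""
    raise NotImplementedError

def log_hourly_averages(outfile: str = "infrasound.csv") -> None:
    samples, hour_start = [], time.time()
    while True:
        # Accumulate the magnitude of each reading (my assumption for
        # the "average noise level" statistic).
        samples.append(abs(read_sample()))
        if time.time() - hour_start >= 3600:
            avg = sum(samples) / len(samples)
            with open(outfile, "a") as f:   # one row per hour, for the spreadsheet
                f.write(f"{time.strftime('%Y-%m-%d %H:%M')},{avg:.1f}\n")
            samples, hour_start = [], time.time()
```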

Frontline

I don’t watch much TV, but recently while on vacation (in Hawaii!!!) I finished my latest read (Conn Iggulden’s “Conqueror”, which I recommend along with the entire series, for those who like historical fiction) and having nothing better to do (it was raining) I succumbed.  “Frontline” was on, and it was a repeat of their October 8, 2013 broadcast of “League of Denial”.  It is a 2-hour documentary about how the National Football League (the NFL) tried to hide the health impacts that players suffered from repeated blows to their heads.  As I watched I couldn’t help but reflect on the similarities between the NFL’s actions and motivations and the wind industry’s.  Downright spooky, in fact. Continue reading Frontline

Kouwen and Weaver

Last year Dr. Kouwen put together a very competent system for measuring noise and wind speed specifically for wind turbines in Ontario.  His first foray into the field demonstrated that the noise from wind turbine projects routinely exceeds both what the developers predicted and what the Ontario regulations allow.  More recently he took his equipment to another location and found, yet again, the same violations.  There is now an unbroken string of measurements (a sampling: Ashbee, Rand, Shirley, Kouwen, Libby) at homes of complaining neighbors that demonstrates, beyond any reasonable debate, that these noise complaints are caused by (drum roll) noise!  I have yet to see a case of a complaining neighbor where there wasn’t some underlying noise or vibration problem that could be traced to wind turbines.  Weaver continues this string. Continue reading Kouwen and Weaver

The MOE and Libby

David Libby lives in rural Ontario, unfortunately within 700 metres of a wind turbine.  He complained to the Ontario MOE about the noise and in December of 2011 they dispatched some noise and weather-measuring equipment to his home.  Whenever the noise bothered him, he could press a button and a 10-minute detailed recording period would start.  During the 7+ days the equipment was in place he pressed the button 9 times.  The MOE ran off and after a while dutifully reported back that the operator was substantially in compliance.  Libby released that report to the public back in January 2012, which got a posting on Ontario Wind Resistance.   John Harrison then took a look at it,  and now we can see just how complicit the MOE is in harming people in order to protect this industry. Continue reading The MOE and Libby

Kouwen on Models

Humans have used models to describe and predict their environment for millennia.  With the advent of computers the number and sophistication of these models has taken a quantum leap.  Many have proven their worth, and their impact upon our view of the universe has been profound.  Unfortunately, it is almost inevitable that something with this much influence over our affairs will be misused by those with a self-serving agenda – much like junk science.

Dr. Nicholas Kouwen, in his study on wind turbine noise, discovered that the models used to predict that noise substantially underestimated it – a most convenient result, given Ontario’s regulatory regime, for the developer who hired the modellers.  In his commentary on why this disconnect occurred he mentioned empirical models and their limitations.  I thought the topic was important enough for a separate posting, and here it is. Continue reading Kouwen on Models
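To give a flavor of what such a model looks like, here is a bare-bones Python sketch of the kind of calculation at its core: a spreading-loss formula plus empirical correction terms.  This is generic hemispherical spreading with a flat absorption allowance, not the specific model the developer’s consultants used:

```python
from math import log10

def predicted_level(lw_dba, distance_m, atm_db_per_km=2.0):
    """Very simplified turbine-noise prediction.

    lw_dba        : turbine sound power level (dBA), from the spec sheet
    distance_m    : receptor distance from the turbine (m)
    atm_db_per_km : assumed flat atmospheric absorption (dB per km)

    Real models (e.g. ISO 9613-2) add ground, weather and terrain
    corrections; those empirical terms are where the trouble hides.
    """
    spreading = 20 * log10(distance_m) + 8   # hemispherical spreading loss
    absorption = atm_db_per_km * distance_m / 1000
    return lw_dba - spreading - absorption

# A 105 dBA turbine at 550 m (Ontario's minimum setback):
print(f"{predicted_level(105.0, 550.0):.1f} dBA")   # -> about 41 dBA
```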

Kouwen on Noise

Of the many issues surrounding wind energy, noise remains among the most controversial.  The industry and governments continue to insist that wind energy projects are appropriately sited – far enough from the neighbors that they are not a nuisance.  However, around the world the health and nuisance complaints, and the home abandonments, indicate that whatever rules are in place are generally not adequate.

Dr. Nicholas Kouwen, a retired engineering professor, had the time and resources to examine the noise issue in some detail for Ontario.  Starting in June of 2012 and continuing into November, he took extended noise measurements at five residences in the Grey Highlands region; three of them within the Plateau Project and two “controls” at locations away from the turbines.  The Ontario wind project noise regulations, pretty much unique in the world, allow more noise at higher wind speeds, so he also recorded wind speeds.  He then compared the actual readings with the Ontario limits, and it should come as no surprise that those limits were routinely violated. Continue reading Kouwen on Noise
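The compliance check itself is simple once you have synchronized noise and wind measurements.  The Python sketch below shows the idea; the limit values in the table are illustrative of Ontario’s sliding scale (roughly 40 dBA at low wind speeds rising to 51 dBA at 10 m/s) rather than quotations from the regulation:

```python
# Illustrative wind-speed-dependent limits (dBA); consult the actual
# Ontario regulation for the real table.
LIMITS = {4: 40, 5: 40, 6: 40, 7: 43, 8: 45, 9: 49, 10: 51}

def exceedances(readings):
    """readings: iterable of (measured_dba, wind_speed_mps) pairs."""
    for dba, wind in readings:
        limit = LIMITS[min(max(round(wind), 4), 10)]  # clamp to table range
        if dba > limit:
            yield dba, wind, limit

# Example with made-up measurements:
data = [(44.2, 5.1), (38.0, 4.0), (47.5, 6.8)]
for dba, wind, limit in exceedances(data):
    print(f"{dba} dBA at {wind} m/s exceeds the {limit} dBA limit")
```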

The Eagle’s Nest

By now probably everyone who reads this already knows about the removal of an active eagle’s nest near Fisherville, Ontario, by Nextera.  The tree (a 100+ year-old cottonwood) and nest were apparently in the way of several turbines and Nextera was unwilling to move the turbines or the service road to spare the nest.  The pair have been reported flying around the area looking for the nest.  The Ontario Ministry of Natural Resources (MNR) was responsible for the decision, and one has to wonder if there’s anything they wouldn’t be willing to sacrifice in order to build the turbines.  To date the MNR has approved anything and everything – except anywhere close to the Toronto area where the current government’s supporters mostly live.   Continue reading The Eagle’s Nest

Revisiting the Ontario Health Review

In May 2010 Dr. Arlene King, the Ontario Chief Medical Officer of Health, released a review that has subsequently been used by the wind industry to “prove” that wind turbines are safe.  The King report was one of Chapman’s 17 reviews, and it has been cited repeatedly by developers, especially in Ontario.  It is by any standard a real disservice to the health of rural Ontarians in the path of wind energy developments.  I’ve posted on it previously, as has the Society for Wind Vigilance.

The money sentence is “The review concludes that while some people living near wind turbines report symptoms such as dizziness, headaches, and sleep disturbance, the scientific evidence available to date does not demonstrate a direct causal link between wind turbine noise and adverse health effects.”  Note the “direct causal” language.  You’d think a Ministry of Health would be concerned about all health issues, not just those that were direct and causal.  Apparently not in Ontario. Continue reading Revisiting the Ontario Health Review

A Tale of Two Homes

I am fortunate (I think) to be able to own two homes: my main residence in Ohio and a secondary home on Amherst Island, Ontario.  One of my great joys (not!) is paying monthly electric bills at both places.  Call me A-R, but I’ve still got all those bills, starting with January 2000.  Finally, my pack-rat tendencies pay off – I can compare Ontario’s electric rates with Ohio’s and see how they have changed over the 12+ years.  I really feel sorry for ordinary Ontarians. Continue reading A Tale of Two Homes

Lansink and Clear Creek

Ben Lansink is on a roll.  Earlier this month (October 2012) he published the first-ever case study on the effects of wind turbines on property values, based on 5 sales and resales in the Melancthon, Ontario area.  Not content with that, he has just published the second-ever case study on those effects, this time based on 7 sales/appraisals and resales in the Clear Creek, Ontario area.  The results are depressingly similar, as related in the following (thankfully clickable) chart:

His study is 58 pages long and includes the supporting data from both areas.  For Clear Creek he eliminated (as he did in Melancthon) farm properties and properties with turbines on them.  Of the 7 that remained, 6 were homes and 1 was a vacant “bush” lot.  Two of the homes were sold well before the project went into operation and resold well afterwards.  The other five properties were appraised by MPAC, Ontario’s tax assessor, before the project and then resold on the open market after the project went into operation.

In the Melancthon study Lansink verified that the original sales to the developer were at reasonable market values; in Clear Creek no developer was involved, so this step was unnecessary.  In desperation, the wind industry might try to argue that the MPAC assessments weren’t accurate, but I wouldn’t hold my breath waiting for them to present any evidence to that effect.  These numbers are hard to refute.

When discussing property values, the wind industry seems fond of statistical significance.  In that spirit, I offer a quick recap of the 12 properties Lansink has studied.  The average decline was 36.99%, with a standard deviation of 12.26 percentage points.  That puts zero – the effect the wind industry claims – 3.02 SDs away from the observed mean, which translates to a 99.87% chance that the wind industry is WRONG.  I’m guessing those are about the same odds that the wind industry will try to ignore this second, very powerful study and continue quoting the flawed and weak but more agreeable Hoen study.
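For anyone who wants to check the arithmetic, here is a quick Python sketch that reproduces the calculation exactly as described above (treating zero as a point 3.02 standard deviations below the observed mean and reading off the one-tailed normal probability):

```python
from math import erf, sqrt

# Summary figures for Lansink's 12 properties, as quoted above
avg_decline = 36.99   # average decline, percent
sd = 12.26            # standard deviation, percentage points

z = avg_decline / sd                # how many SDs zero sits from the mean
p = 0.5 * (1 + erf(z / sqrt(2)))    # one-tailed normal probability

print(f"z = {z:.2f} SDs")    # -> z = 3.02 SDs
print(f"P = {p:.2%}")        # -> P = 99.87%
```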

Lansink on Property Values

There are 3 major techniques used to establish property values, in decreasing order of accuracy: case studies, paired analysis (aka comps) and regression.  A case study looks at the same property selling multiple times, a paired analysis compares similar properties selling at about the same time, and a regression study gathers data on all the sales in an area and attempts to figure out how different factors affect the price.
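To make the difference concrete, here is a minimal Python sketch of the case-study calculation, using invented numbers.  Because each property is compared against its own earlier sale, no statistical model stands between the data and the conclusion:

```python
# Hypothetical (sale_price, resale_price) pairs, for illustration only
repeat_sales = [
    (250_000, 160_000),
    (310_000, 195_000),
]

# Case study: each property is its own control, so the decline is
# read directly off the repeat sale.  A regression would instead pool
# every sale in the area and try to estimate a "turbine" coefficient.
for before, after in repeat_sales:
    decline = (before - after) / before
    print(f"decline: {decline:.1%}")
```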

Ben Lansink is a professional real estate appraiser based in Ontario.  One of his areas of expertise is property value reductions.  Recently he published a case study [backup copy] containing 5 sales/resales of property in the Melancthon area.  These 5 properties were purchased by the developer once the previous owners’ complaints became serious, and were subsequently resold to third parties with industry-protecting covenants on the titles.

The wind energy industry consistently claims to have studies (e.g. Hoen) showing that wind projects do not reduce property values.  Every study they quote uses the weakest technique, regression, buttressed by claims of statistical significance, to arrive at that conclusion.  I’ve written a number of critiques of these studies, all of which have significant problems.  Regression and statistical analysis, aside from being the least accurate, are also relatively easy to game to suit the sponsor, and gaming has been rampant.  Lansink’s is the first case study I’m aware of.  If the wind industry were serious about discovering the truth about the effect of their projects on real estate values they would adopt the findings of this study.  But when a man’s salary depends on him not understanding something…

As it turns out, I had posted on most of the same properties some time ago, so these reductions are not new news.  What Lansink’s report supplies is a more formal and complete analysis of the sales by a professional.

The study runs 76 pages, the majority of which is the documentation of the sales and resales of the 5 properties.  The summary chart is on page 62, and it tells you pretty much all you need to know (click to enlarge):

In my earlier posting I had just assumed that the sale and resale prices reflected the market values current at the times of the sales – namely, that the developer made a fair market offer when buying the previous owners out.  Lansink takes the time to compare these sale prices to surrounding prices and finds that, indeed, the developer made what appear to be honest pre-project market value offers to the previous owners.  Since the resales were made on the open market there can be no doubt about their accuracy.  Additionally, Lansink factors in the area’s general real estate price increases during the several-year interval between the sales and resales (a sketch of that adjustment follows).  As large as my numbers were, his are larger.
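A Python sketch of that adjustment, again with invented numbers: the pre-project sale price is first inflated by the area’s general appreciation over the holding period, and the decline is measured against that adjusted baseline rather than against the raw sale price:

```python
# Illustrative figures only; see Lansink's report for the actual data
sale_price = 300_000       # pre-project purchase by the developer
resale_price = 210_000     # post-project open-market resale
annual_appreciation = 0.04 # area's general market trend
years_held = 5

# What the home "should" have resold for had it merely tracked the market
expected = sale_price * (1 + annual_appreciation) ** years_held

decline = (expected - resale_price) / expected
print(f"decline vs. market-adjusted baseline: {decline:.1%}")  # ~42.5%
# versus the raw, unadjusted decline of 30.0% -- hence "his are larger"
```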

The result is a very robust study that in any sane world would be adopted by everybody with an interest in an honest reckoning.  The consequences of having this fine study actually adopted (e.g. by the courts) are pretty painful for the industry.  I’m guessing the industry will try to ignore it, and if/when forced to confront it, they will mumble something about Lansink being an anti-wind agitator who produces biased and anecdotal evidence.  Never mind that his evidence is far stronger than anything they have been quoting for years now.

Will the Ontario government and legal systems care about this?  I’m guessing not.  Accepting inconvenient facts is not this government’s strong suit.  As I mentioned earlier, this isn’t exactly new news, and no sentient being should be surprised by it.  Unfortunately this government is determined to push these projects through, no matter the harm to the neighbors.

Wolfe Island, Property Values and MPAC

Wolfe Island is located at the far eastern end of Lake Ontario and is traditionally considered to be the start of the St. Lawrence River and the Thousand Islands.  There is no doubt it is part of one of the loveliest areas in the world, as well as important habitat for birds.  No matter; there are now 86 wind turbines on the island’s western half.  For many of the residents on the island this project has been a disaster, and part of their response has been to ask for reductions in their tax assessments from MPAC, the folks who do the assessing for the province.

It is a one-sided contest.  The Kenneys’ appeal is instructive.  It was the two of them against a small army of lawyers from MPAC as well as the government.  I understand that MPAC was willing to give them a reduction, but the sticking point was that the Kenneys wanted the wind turbines listed as a cause.  The wind industry (for obvious reasons) really wants to maintain the fiction that wind projects do not affect home prices, and even a small breach in that fiction might cause the entire edifice to come tumbling down like the house of cards that it is. Continue reading Wolfe Island, Property Values and MPAC

Cumulative Effects

UPDATE – it turns out that the “Osprey” Conservation Area has nothing to do with ospreys – it was named for a nearby town.  I’m now trying to find out what sorts of wildlife do live there, and I’ll update accordingly.

A major criticism of Ontario’s approval process for wind projects is that each project is “considered” by itself.  At no point is there any review of all the projects in an area to see if, together, they represent an environmental issue.  As an example of this problem, while reviewing the maps on my Ontario Wind Turbines site I noticed the area just north of the Melancthon project.  Here’s an enlargeable snapshot of the area:

Note the empty space in the middle of those projects?  One has to wonder why nobody is proposing anything there.  Looking closer, note the “Osprey Wetland Conservation Lands”.  At least nobody is proposing a project for the Conservation Area.  Originally I thought (of course) that resident ospreys had given the Area its name, but it turns out a town did.  Still, whatever wildlife uses that area is going to be surrounded.  I wonder if any studies will be undertaken to discover the impacts.