As mentioned elsewhere, wind energy proponents have been eager to show that their turbines don’t lower house prices. There have been three major industry-sponsored studies that appear to make this claim. I say “appear” because, to be technical, that isn’t the claim they are making. Most of these studies actually show decreases in value. The claim they are making is that these decreases are not statistically significant, and the casual reader (which sadly includes most politicians and reporters) finds it easy to jump from one assertion to the other. As an example, the REPP Report concludes, “…the results point to the same conclusion: the statistical evidence does not support a contention that property values within the view shed of wind developments suffer…”
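To see how a real decrease can nonetheless be waved away as “not statistically significant,” consider a toy example. All of the numbers below are invented for illustration; they come from none of the studies discussed here. With a small, noisy sample, a genuine average drop can still fail a conventional significance test:

```python
# Hypothetical illustration: an observed average price decrease that a
# significance test nonetheless fails to confirm. All numbers invented.
from statistics import mean, stdev

# Sale prices (in $1000s) near and far from a hypothetical wind project.
near = [182, 195, 170, 188, 176, 191, 168, 185]
far  = [190, 201, 178, 196, 183, 199, 175, 192]

decrease = mean(far) - mean(near)  # the observed average drop

# Welch's t-statistic for the difference in means.
n1, n2 = len(near), len(far)
se = (stdev(near) ** 2 / n1 + stdev(far) ** 2 / n2) ** 0.5
t = decrease / se

print(f"observed decrease: ${decrease:.1f}k")
print(f"t = {t:.2f}")  # well below ~2.1, so "not statistically significant"
```

The observed decrease here is real and nonzero, yet the t-statistic falls short of the usual ~2.1 cutoff, so a study could truthfully report “no statistically significant effect” — which is not the same as “no effect.”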
The REPP Report (also known as Sterzinger, after the lead author), titled The Effect of Wind Development on Local Property Values, was published in May 2003. From their web site: “REPP’s goal is to accelerate the use of renewable energy by providing credible information, insightful policy analysis, and innovative strategies”. That doesn’t sound very unbiased to me. The study lumped into its “close” group all the properties from ten different and undefined areas of the U.S., each at least 10 miles in diameter. It relied on the industry’s “visual” criterion, and thus ignored the far more important “audible” criterion.
The Report used regression analysis. By international standards, a regression must explain 90% of the price variance before it is considered valid. REPP seemed to think 70% was “a good fit”, but very few of their numbers reached even that, and none reached 90%. There were also problems with timing: the data window must start several years before construction begins, since several industry studies show price declines beginning as soon as locals sense the possibility of a project.
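The “percent of variance explained” figure at issue is R-squared. A minimal sketch of what it measures, using invented (distance, price) data rather than REPP’s actual dataset or model, shows how a fitted line through noisy data can leave most of the price variance unexplained:

```python
# A minimal sketch of R-squared ("explained variance") for a simple
# linear fit. Data are invented for illustration; not REPP's dataset.
from statistics import mean

# Hypothetical (distance_miles, sale_price_$1000s) pairs with heavy noise.
data = [(0.5, 170), (1.0, 195), (1.5, 160), (2.0, 200),
        (2.5, 175), (3.0, 210), (3.5, 180), (4.0, 205)]
xs = [d for d, _ in data]
ys = [p for _, p in data]

# Ordinary least-squares slope and intercept.
xbar, ybar = mean(xs), mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in data)
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# R-squared: fraction of the price variance the fitted line explains.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in data)
ss_tot = sum((y - ybar) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # noisy data: nowhere near 0.90
```

An R² this low means the model leaves most of the variation in prices unaccounted for, which is exactly why a 90% threshold — rather than 70% — matters before drawing conclusions from the fit.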
The criticisms of REPP are numerous; one example is the Gardner presentation, slides 22-23. Even Hoen, an industry darling, is unsparing in his criticism of REPP.
Hoen was published in 2009 and is the largest study of its kind to date, covering some 7,500 sales from 10 areas across the U.S. It has its share of problems, which I have detailed in my Hoen Critique.
Canning was published in early 2010 and studied one area in Canada. Hoen may have had problems, but Canning is downright dishonest, as detailed in my Canning Critique.
Other Studies, of varying quality:
Melancthon – from Ontario.