This article was originally published in the September/October 1997 issue of Home Energy Magazine.
Differences between HERS and HERS

Directly comparing the accuracy of rating systems based on case studies is almost like comparing apples and attics; each sample of homes and each HERS is unique. Sample differences include the average age of the homes, which variables a system considers, which software a HERS uses, and the local climate.
The average age of the rated homes is significant. The Kansas homes were almost all new, while the CHEERS houses were significantly older than the other groups. Energy use in older houses tends to be harder to predict than in newer ones.
Unlike the other HERS, Home Energy Ratings of Ohio allows the rater to input occupant-specific characteristics, such as the number of actual occupants. According to Canadian research, collecting a lot of occupant data, such as thermostat settings, improves the accuracy of the rating for particular occupants. However, it makes the rating less applicable to other potential occupants. Indeed, this is contrary to what many HERS agencies see as the goal of the ratings. In the words of Mark Janssen of Indiana's HERS, "We rate buildings, not life-styles."
The CHEERS ratings were the only ones that did not include blower door testing. This may have contributed to their relative imprecision, but it also helped make them less expensive to conduct (see Table 1).
The CHEERS ratings were all performed in 1994, while most others were performed in 1996. Significant progress in rating systems was made in the interim. The CHEERS ratings were conducted using a cumbersome DOS-based program that has since been replaced with a more user-friendly Windows version. The simulation engine in the new software is also entirely new.
One of the most significant differences among the different ratings systems is the severity of the local weather, which affects the ease of prediction. Prior research has found that it is harder to predict energy use in mild climates than in severe climates. For example, some homeowners in mild climates will use almost no heat or air conditioning, while some will use a lot. In severe climates, almost everyone uses some heating or cooling energy.
The data seem to show that the take-back effect is more pronounced in mild climates. The California locations have much milder winters than the other locations, and Colorado has the most heating degree-days. California's Home Energy Efficiency Rating System appears to be well calibrated for high-efficiency houses, but it overpredicts considerably for lower-scoring houses. Energy Rated Homes of Colorado is well calibrated for medium-efficiency houses. Very high-efficiency houses are slightly underpredicted, and very low-efficiency houses are slightly overpredicted. The prediction error varies considerably across the efficiency range in California's mild climate, while it remains relatively constant in Colorado.
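The heating degree-day comparison above rests on a simple, standard calculation: for each day, count the amount by which the mean outdoor temperature falls below a base temperature (65°F is the common U.S. convention), and sum over the period. A minimal sketch, with purely illustrative temperature data:

```python
# Heating degree-days (HDD): the sum, over a period, of how far each
# day's mean outdoor temperature falls below a base temperature.
# 65 F is the conventional U.S. base; the daily means are made up.

def heating_degree_days(daily_mean_temps_f, base_f=65.0):
    """Sum of max(0, base - mean) over all days in the period."""
    return sum(max(0.0, base_f - t) for t in daily_mean_temps_f)

# A hypothetical mild-climate week vs. a severe-climate week
mild_week = [60, 62, 58, 64, 66, 63, 61]
severe_week = [20, 25, 15, 30, 28, 22, 18]

print(heating_degree_days(mild_week))    # prints 22.0
print(heating_degree_days(severe_week))  # prints 297.0
```

The asymmetry the article describes falls out of this metric: in the mild week, several days contribute nothing at all to the total, so small behavioral differences (a little heating versus none) swing actual use far from any prediction, while in the severe week every day forces some heating.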
Different software packages are gradually being made more uniform by a national testing process called the BESTEST. BESTEST rates how well the results of a software package match the results of current industry standards. Other differences among the systems may also fade with time, as national accreditation takes hold. But at this point, HERS bodies cannot even agree on who will do the accreditation, so the differences remain.
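In essence, BESTEST-style comparison checks whether a candidate program's result for each standardized test building falls within the range spanned by a set of reference programs. A simplified sketch of that check (the case names follow BESTEST conventions, but the numbers and acceptance ranges here are hypothetical):

```python
# Simplified BESTEST-style acceptance check: a candidate simulation
# program's annual result for each test case should fall within the
# range spanned by reference programs. All numbers are hypothetical.

reference_ranges = {
    # test case: (low, high) annual heating load, kWh,
    # across the reference programs
    "case600": (4300, 5700),
    "case900": (1200, 2100),
}

def within_reference_range(case, value, refs=reference_ranges):
    """Return True if the candidate's result falls inside the
    reference range for the given test case."""
    low, high = refs[case]
    return low <= value <= high

print(within_reference_range("case600", 5000))  # prints True
print(within_reference_range("case900", 2500))  # prints False
```

A program that strays outside these ranges on many cases is flagged for scrutiny, which is how the process nudges the various HERS software packages toward uniform results.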