Home Energy Magazine Online November/December 1992


TRENDS IN ENERGY

 

 


Trends in Energy is a bulletin of residential energy conservation issues. It covers items ranging from the latest policy issues to the newest energy technologies. If you have items that would be of interest, please send them to: Trends Department, Home Energy, 2124 Kittredge St., No. 95, Berkeley, CA 94704.

 


Temperature Data Concerns in Short-Term Metering

Short-term evaluations of weatherization programs are becoming increasingly popular because they produce timely results and are less prone to the sample attrition problems often associated with methods that require two years of utility data. Yet experience with low-income weatherization evaluations in Virginia (see "A Warm Wind Blows South," HE Jan/Feb '92) and Indiana has raised questions about both the quality of the temperature data typically used in short-term monitoring approaches and how those data are used.

One common short-term approach uses elapsed timers to record furnace run time. A typical protocol involves installing the run-time meter five or six weeks before the scheduled retrofit, at which time the furnace firing rate and heated area of the house are also determined. The weatherization agency then calls once a week and asks the occupant to read the run-time meter over the phone. The run time for the week is multiplied by the furnace firing rate, and the product is divided by the heated area and the heating degree-days (HDD) for the week to get the energy intensity of the house in Btu/ft2-DD. This is typically done for a six-week pre-retrofit period and again for a six-week post-retrofit period. The savings, or more precisely the change in energy intensity, is then calculated from the difference between the pre- and post-retrofit measurements.
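To make the arithmetic concrete, here is a minimal sketch of the weekly calculation in Python. The firing rate, heated area, and weekly readings are hypothetical values chosen only for illustration; they are not data from the Virginia or Indiana studies.

```python
# Illustrative sketch of the run-time metering arithmetic described above.
# All values and variable names are hypothetical; units are noted in comments.

def weekly_energy_intensity(run_time_hours, firing_rate_btu_per_hr,
                            heated_area_ft2, weekly_hdd):
    """Energy intensity for one week, in Btu per square foot per degree-day."""
    weekly_consumption_btu = run_time_hours * firing_rate_btu_per_hr
    return weekly_consumption_btu / (heated_area_ft2 * weekly_hdd)

# Example: six pre-retrofit and six post-retrofit weeks of phoned-in readings.
firing_rate = 80_000      # Btu/hr, measured at the furnace
heated_area = 1_200       # ft2, measured at the audit

pre_weeks  = [(52.0, 180), (60.5, 210), (48.0, 165), (71.0, 240), (55.5, 190), (63.0, 220)]
post_weeks = [(38.0, 175), (41.5, 205), (33.0, 160), (47.0, 230), (36.5, 185), (42.0, 215)]
#             (run-time hours, heating degree-days) for each week

def mean_intensity(weeks):
    intensities = [weekly_energy_intensity(rt, firing_rate, heated_area, hdd)
                   for rt, hdd in weeks]
    return sum(intensities) / len(intensities)

pre_ei, post_ei = mean_intensity(pre_weeks), mean_intensity(post_weeks)
print(f"Pre-retrofit:  {pre_ei:.1f} Btu/ft2-DD")
print(f"Post-retrofit: {post_ei:.1f} Btu/ft2-DD")
print(f"Change in energy intensity: {pre_ei - post_ei:.1f} Btu/ft2-DD")
```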

More Than Checking a Thermometer

Program managers typically obtain the temperature data used in this approach from local newspapers or directly from local weather stations. In fact, proponents of run-time metering sometimes tout the ease of using local temperature data as an advantage of this short-term approach over methods such as the PRInceton Scorekeeping Method (PRISM), which require large temperature files that must be carefully prepared and continually updated. The assumption is that a local data source, by virtue of its closer proximity to the house under consideration, produces more accurate results. But data from Virginia and Indiana do not support this assumption. The apparent advantage of local temperature data is outweighed by their poorer quality.

In Indiana, when heating degree-days are plotted against time for the major weather stations, the curves follow very similar patterns, with the more northern stations plotting a consistent number of degree-days above the stations to the south. Data from local stations, however, do not plot nearly as neatly, and the difference is often significant. Figure 1 shows heating degree-day data for a local weather station (Muncie) compared to data from two major weather stations located approximately 60 miles to the north (Ft. Wayne) and south (Indianapolis). In this case, accuracy would be sacrificed rather than gained by using the local weather station.

This is not an isolated occurrence: data from six other local Indiana weather stations in the same vicinity show similar variations from the close-order tracking of the two major stations. Better data quality can be expected from the larger stations because they use continuously recording thermometers, whereas local stations typically have only minimum/maximum thermometers that are read once a day. Further, the recording equipment at a major station is housed in a carefully designed and maintained enclosure to minimize extraneous influences, and it receives routine maintenance and calibration. Technicians also routinely check and verify the data from the major stations.

The difference between data from a continuously recording thermometer and a minimum/maximum thermometer read once a day can be significant. For example, a local station using a minimum/maximum thermometer read each morning at 7:00 a.m. will in most cases record yesterday's maximum temperature under today's date. This slight mismatch would probably not be noticeable with monthly data, but it can introduce errors when using weekly data.
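A small sketch illustrates how the once-a-day reading can shift a maximum into the wrong calendar day. The hourly temperature profiles below are invented, and the degree-day arithmetic uses the common (max + min)/2 convention for the daily mean; actual station practice may differ.

```python
# Hypothetical hourly temperatures (deg F) for two consecutive days, used to
# contrast a continuously recorded daily max/min with a min/max thermometer
# that is reset and read once each morning at 7:00 a.m.

day1 = [28 + 14 * max(0, 1 - abs(h - 15) / 9) for h in range(24)]  # warm afternoon
day2 = [20 +  6 * max(0, 1 - abs(h - 15) / 9) for h in range(24)]  # colder day

# Continuous recorder: true min and max for each calendar day.
true_day2_mean = (min(day2) + max(day2)) / 2

# Min/max thermometer read at 7 a.m.: the "day 2" reading spans
# 7 a.m. on day 1 through 7 a.m. on day 2, so it captures day 1's
# afternoon maximum and credits it to day 2.
window = day1[7:] + day2[:7]
shifted_day2_mean = (min(window) + max(window)) / 2

base = 65.0
print(f"Day 2 HDD from continuous record:   {base - true_day2_mean:.1f}")
print(f"Day 2 HDD from 7 a.m. min/max read: {base - shifted_day2_mean:.1f}")
```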

What Do You Do When You Run Out of Winter?

The quality of the temperature data is not the only concern. Weeks without enough heating degree-days can cause problems, because the relationship between heating consumption and degree-days has been found not to hold in these warmer weeks. Factors such as mass effects, solar and internal gains, and occupant behavior all become relatively more important in weeks with few heating degree-days. Unfortunately, such weeks occur all too often in short-term metering studies. The typical research design calls for the pre-retrofit period to fall in late fall or early winter, followed by the post-retrofit period in the spring. There is usually no problem with the pre-retrofit data; if the weather is too warm, the agency postpones the retrofit until it collects sufficient data. However, this slippage in the schedule can easily push the post-retrofit period into March, which in many climates contains weeks without enough heating degree-days.

What constitutes a week with an insufficient number of heating degree-days? Our work suggests that agencies should use caution with weeks averaging less than 20 heating degree-days per day. If such weeks must be used, the program manager should consider computation methods that weight degree-days equally, rather than the approach described above, which weights weeks equally.
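The difference between the two weighting schemes can be seen in a short sketch. All values are hypothetical; the point is only that a warm, noisy week pulls the equally-weighted weekly average around far more than it pulls the degree-day-weighted average.

```python
# Hypothetical weekly data: (run-time hours, heating degree-days).
# The last week is warm, averaging well under 20 HDD per day, and the
# furnace barely runs because of solar gains and mass effects.
weeks = [(52.0, 180), (60.5, 210), (48.0, 165), (71.0, 240), (55.5, 190), (4.0, 35)]

firing_rate = 80_000   # Btu/hr, assumed
heated_area = 1_200    # ft2, assumed

# Method 1: weight weeks equally (mean of the weekly intensities).
weekly_ei = [rt * firing_rate / (heated_area * hdd) for rt, hdd in weeks]
weeks_equal = sum(weekly_ei) / len(weekly_ei)

# Method 2: weight degree-days equally (total consumption over total degree-days).
total_btu = sum(rt for rt, _ in weeks) * firing_rate
total_hdd = sum(hdd for _, hdd in weeks)
dd_equal = total_btu / (heated_area * total_hdd)

print(f"Weeks weighted equally:       {weeks_equal:.1f} Btu/ft2-DD")
print(f"Degree-days weighted equally: {dd_equal:.1f} Btu/ft2-DD")
# The warm week's anomalous intensity has much less influence under Method 2.
```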

Another factor is the reference temperature. The usual approach for calculating savings from run-time data uses heating degree-days calculated with a base of 65 degrees F. Our research suggests that a small error in the reference temperature can result in large errors in the pre- and post-retrofit energy intensities. Moreover, this potential error is larger in weeks with few heating degree-days, that is, in weeks in which the average daily temperature is close to the reference temperature. An approach that finds the best reference temperature for each house should improve the accuracy of run-time metering results.

One technique, suggested by Michael Blasnik of the Grass Roots Alliance for a Solar Pennsylvania (GRASP), is to compute the average energy intensity for a wide range of possible reference temperatures. The best reference temperature is the one that yields the smallest relative standard deviation of the weekly energy intensities. This technique is relatively easy to apply in a spreadsheet, provided temperature data from only a few major stations are used. In the Indiana study, this method of selecting the reference temperature resulted in significantly tighter confidence intervals (more precise results). It appears to be an especially useful approach when the data include weeks with few heating degree-days.
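For readers who prefer a script to a spreadsheet, the following sketch shows one way the selection might be coded. The run times and daily temperatures are invented, and the 50-75 degree F search range with a 0.5 degree step is an arbitrary choice for illustration, not part of Blasnik's method.

```python
import statistics

# Hypothetical inputs: furnace run time by week and daily mean outdoor
# temperatures (deg F) for the same weeks. A real application would use
# daily data from a major weather station.
firing_rate = 80_000   # Btu/hr, assumed
heated_area = 1_200    # ft2, assumed
run_time_hours = [52.0, 60.5, 48.0, 71.0, 55.5, 42.0]
daily_temps = [
    [35, 38, 40, 36, 33, 37, 39],
    [30, 32, 34, 31, 29, 33, 35],
    [40, 42, 44, 41, 39, 43, 45],
    [25, 27, 29, 26, 24, 28, 30],
    [33, 36, 38, 34, 31, 35, 37],
    [45, 48, 50, 46, 44, 47, 49],
]

def weekly_hdd(temps, base):
    """Heating degree-days for one week at the given base temperature."""
    return sum(max(0.0, base - t) for t in temps)

def relative_std_dev(base):
    """Coefficient of variation of the weekly energy intensities at this base."""
    intensities = []
    for rt, temps in zip(run_time_hours, daily_temps):
        hdd = weekly_hdd(temps, base)
        if hdd <= 0:
            return float("inf")   # base too low for this week; rule it out
        intensities.append(rt * firing_rate / (heated_area * hdd))
    return statistics.stdev(intensities) / statistics.mean(intensities)

# Scan candidate reference temperatures and keep the one with the
# smallest relative standard deviation of the weekly intensities.
candidates = [b / 2 for b in range(100, 151)]   # 50.0 to 75.0 deg F in 0.5 steps
best = min(candidates, key=relative_std_dev)
print(f"Best reference temperature: {best:.1f} deg F "
      f"(CV = {relative_std_dev(best):.3f})")
```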

-William W. Hill

Bill Hill is a senior researcher with the Ball State University Center for Energy Research, Education, and Service in Muncie, Ind.

 


Figure 1: Weekly heating degree-day data from the Muncie local weather station compared with two major stations approximately 60 miles to the north (Ft. Wayne) and south (Indianapolis).

