Saturday, January 29, 2011

Residential Geothermal HVAC

A little over a year ago I decided to investigate the possibility of purchasing a new heating system. My oil-burning furnace was over 30 years old, and the thought of it crapping out in the middle of a cold New Hampshire winter was not pleasant. I started researching my alternatives and quickly became interested in a geothermal HVAC system. Geothermal seems to be something many are interested in but few know much about, and the number of questions I get from people who have learned I've done some research into it prompted me to write this short summary of my personal findings.

A Quick Overview

The basic idea of a geothermal HVAC system is to take advantage of the earth's relatively stable temperature. At a surprisingly shallow depth (~6 ft) the earth maintains a moderately consistent temperature year round, and the deeper you go, the more stable the temperature is. The actual temperature varies by location, but the ground up my way stays around 49 degrees F. This is much warmer than the average air temperature in the winter and cooler than the average air temperature in the summer, providing a means to more efficient heating and air conditioning.

A typical geothermal system consists of an electric heat pump along with lots of tubing run through the ground to help transfer heat from the earth in the winter (or to the earth in the summer) via a liquid solution pumped through the tubing. The larger the home, the more tubing (and land area) required. The way this tubing is run varies depending on where you live. In the good old Granite State, the ground is hard and difficult to push around, which is why most systems here are implemented as “vertical” systems: one or more deep holes are drilled straight down into the earth, and tubing is spiraled down into and back up out of each hole. One advantage of such a system is that by traveling hundreds of feet into the earth, the tubing reaches some extremely stable temperatures. A second advantage is that it does not require much property; vertical systems can typically be installed on small lots.

The other major system type is known as “horizontal.” Here, a bulldozer is used to move some earth and bury spiraling tube 6-10 feet below the surface. These systems are more common in the Midwest or in areas with softer ground. They tend to be cheaper to install, but require a fairly large plot of land.

A third, less common system type lays the tubing in a pond. This requires that you have a large pond on your property, which isn't very likely. If you're one of the lucky few, however, this is one of the cheapest ways to implement a geothermal system. Even in a cold winter, the bottom of a large pond won't freeze. It will, however, be colder than the deep earth, so the low cost of installation is countered by lower energy efficiency.

A water/antifreeze solution is pumped through the tubing to an electrically powered heat pump, which heats or cools your home. The more heat you can extract from the earth in the winter, or dump into the earth in the summer, the more efficiently the heat pump can run and the less electricity is consumed. These systems are almost always forced hot air systems. They typically can't generate a high enough temperature to work with forced hot water systems, although radiant floor heating is a possibility.

Reliability and Efficiency Factors

The vertical system I was considering comes in two varieties: open loop and closed loop. Closed loop is exactly what it sounds like: a small electric pump circulates the liquid solution around a loop of tubing that goes down into the earth and comes back up. An open loop system works like a typical water well (and may even share functionality with the home's well), pumping water up from deep in the ground. The return water is simply discharged near the top of the well, where gravity can do its thing.

There is an efficiency trade-off between these two systems that depends on home size. Smaller homes are more efficiently heated with a closed loop system because the large amount of electricity pulled by a well pump (~8 Amps) is avoided; closed loop systems can operate with very low power pumps that don't have the arduous task of fighting gravity. Large homes are more efficiently heated with open loop systems because such systems generally provide warmer water than can be achieved by recycling the same liquid continuously around a loop, allowing higher efficiency at the heat pump.

One major drawback to open loop systems, however, is that the well pump is buried in the ground. These pumps take some abuse (especially if they're running all winter) and will periodically need to be replaced. Of course, that will probably happen in the middle of the winter, and the cost of replacing a well pump is usually a few thousand dollars (which should be included in the ultimate cost of the system). A second drawback is that over a winter season, the greater efficiency offered by an open loop system can start to dwindle as cold water makes its way back down to the well. Many such systems include an alternate runoff path. Typically controlled by a valve, the alternate runoff allows the homeowner to bleed the return water off to a different location for a period of time, allowing fresh warm water to fill the well reservoir. Doing this too long, however, runs the risk of running the well dry.

Since I have a small home, the closed loop system looked much more attractive to me. In addition to the higher efficiency it would provide for my particular home, I liked the lower maintenance of the closed loop system along with the security of knowing that all moving parts (that could break) would be in my basement and comparatively easy to repair. It should be noted that the reliability of the tubing itself is excellent. Your kids' kids won't have to worry about it decomposing.

Perhaps the most important aspect of a geothermal system is determining the amount of tubing and ground work required for your home. A poor installation can end up being an extremely expensive mistake. If the system is too small, the earth around the tubing will start to cool down in the winter. The heat pulled from the ground will not be able to replenish itself and the system will become effectively useless. At this point, a pure electric heat system typically takes over (cha-ching!). Avoiding this makes a conservatively oversized system sound like a no-brainer, but over-sizing the system can reduce running efficiency and can be significantly more expensive to install. Because of these factors, I do not recommend this as a do-it-yourself project and would only recommend going with an established geothermal company with experience and a list of previous customers you can contact. Most of these companies will use a combination of computer software and experience to determine the required groundwork for your house.

The Important Stuff - Money

First and foremost, these systems ain't cheap. I was quoted just over $20,000 for a system for my relatively small Cape-style home. At the time, the government was offering a 30% tax credit toward the cost, but I'm not sure if such a credit is still available.

Most people, myself included, like to think of such huge purchases as investments. So I created a new Excel sheet (actually an OpenOffice Calc sheet) and started attempting to calculate the break-even time for my potential geothermal system. Clearly, the upfront cost is significantly higher than other systems, but the year-over-year energy savings will eventually make up the difference compared with an oil or natural gas system, and after that the system starts saving money. Easy, or so I thought. The fact is, there are some huge variables here that are almost impossible to calculate or predict.

The first is the actual energy savings. I've found geothermal companies completely unwilling to give me even a gross estimate of the predicted energy costs. The “don't worry, I won't sue you if you're wrong” line doesn't work here. The best I could get was a list of previous customers with claimed energy savings between 25% and 75%. That's a huge margin, and much of it probably depends on the year: comparing the savings to a previous year with $5 oil, for example, will show much greater savings than a year with $2.50 oil. As an electronics engineer, I know to use the worst-case value on a component's spec sheet, but using 25% savings greatly increases the length of time required to justify such an investment compared to, say, 50% or 75%.
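
To make that concrete, here's a bare-bones sketch of the kind of arithmetic my spreadsheet boiled down to. Every number in it is a made-up placeholder (the installed cost roughly mirrors my quote after the 30% credit; the fuel bill is a guess), and it ignores the electricity the heat pump itself consumes, so treat it as an illustration of how much the savings margin matters rather than a real estimate.

    #include <stdio.h>

    int main(void)
    {
        /* All figures are hypothetical placeholders, not quotes or measurements. */
        const double installed_cost = 20000.0 * 0.70;   /* system price after a 30% tax credit */
        const double annual_fuel_bill = 2500.0;         /* assumed yearly oil/gas spend */
        const double claimed_savings[] = { 0.25, 0.50, 0.75 };

        for (int i = 0; i < 3; i++) {
            double yearly_savings = annual_fuel_bill * claimed_savings[i];
            printf("%2.0f%% savings: roughly %4.1f years to break even\n",
                   claimed_savings[i] * 100.0, installed_cost / yearly_savings);
        }
        return 0;
    }

With those made-up numbers, the 25% figure stretches the payback past two decades while 75% drops it under eight years, which is exactly why that wide range of claimed savings made the decision so hard.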

Another factor to consider here is the future price of oil and gas. I did some research on the Internet and found that either a) we have tons of oil, and if the government just allows us to get it, energy will nearly always be cheap, or b) we have no oil left, anyone who says otherwise is lying, and by 2030 oil will be a bazillion dollars a barrel. I frankly have no idea what the cost of oil or gas will be in the future, but my guess is it will continue to increase, either due to supply issues, government mandates and programs, or a combination of the two. The rate of this increase can have a large impact on the return on investment of a geothermal system. I also tend to think that going into the future, the cost of electricity will eventually start to fall with new alternative energy developments, and this too improves the long-term prospects of a geothermal system, though again, this is just a prediction.

One more factor to consider is the value such a system adds to a home. Again, there is not much information to go by here. There are so few homes with geothermal systems on the market that research is difficult. It also seems greatly dependent on the year: in a year when oil and gas are very expensive, buyers may place much more value on a geothermal system than in a year with cheap oil and gas prices. There is also somewhat of an emotional role here, depending on the buyer. The “green” conscious among us may see such a system as very appealing for non-monetary reasons, and may be willing to pay more because of it.

In the end, an accurate calculation is nearly impossible. I estimated a reasonable expectation of a 10-year break-even point, but could see it easily being longer than that. As the home my family and I are currently living in is our starter home, it just didn't seem worthwhile, and my wife getting pregnant around this time sealed our decision to hold off. I still think such a system can make sense for many out there, however, and hope that for my next home (which I'll most likely live in for a long time) I am able to use such a system. In certain situations, such as when designing your own home from scratch, a geothermal system can be planned in for a much lower cost, for example if it is included as part of the well system. Combined with a solar array to produce electricity, such a system can provide a nearly complete “off the grid” solution. I like to dream big, but think that realistically this type of setup, or something similar, will be very affordable and even popular within the next few decades.

Cable Shielding

Cable shielding is used to help protect the signal line or lines in a cable from outside interference as well as to prevent the signal from radiating noise to the outside world. The shield is usually grounded at one or both sides of the cable. Some say you should always ground only one side of a cable shield; others say to ground both sides every time, or one or both sides depending on the frequency of the cable's signal, or one side through a capacitor, a resistor, a capacitor and resistor, or some other scheme.

This can make choosing the best shield termination scheme for a particular application difficult. There are plenty of sources dictating the best action to take (or at least the author's opinion of it) without explaining why, which is really no help at all. While there are no be-all, end-all rules when it comes to shield termination, understanding a few common points regarding cable shielding can help you make an educated decision for your application.

One Side or Both Sides, What's the Difference?

The main difference between grounding one or both sides of a cable shield is the type of radiation protection offered. Grounding one side of a cable's shield provides electric field protection. Holding the shield at a constant potential makes electric field noise outside the cable invisible to the internal conductor or conductors. This shielding effect is exactly what we think of when we hear the word “shield” (think of the Star Trek Enterprise shielding itself from a Klingon attack).

Single-sided shield grounding, unfortunately, provides no protection against magnetic field radiation, which simply plows through our measly single-sided shield. Magnetic radiation, in fact, is extremely difficult to shield in a 'blocking' manner without the use of exotic shielding techniques involving very thick shields made of magnetic material.

The best way to deal with magnetic radiation is to use its own force against it (just as a smart ninja would do to a larger attacking enemy). This is done by grounding both sides of the shield. Doing so allows magnetic field radiation to push current through the shield, which in turn creates its own radiation at the internal conductor or conductors (in the opposite direction of the original radiation), canceling out the impact of the noise on the signal. With only a single side grounded, however, there can be no current flow in the shield, and therefore no “shielding” effect against magnetic field radiation.

It is important to note that grounding both ends of a cable's shield has potentially negative consequences. If there is a difference in ground potential between the two ends of the cable, ground loops can form. The current flowing through the shield due to the ground differential creates radiation which is injected into the cable's signal conductor or conductors, actually adding noise to the signal. So, while grounding a cable's shield at both ends can improve things, it can also make things worse.

It is also important to note that this form of magnetic shielding only works at high frequency (typically above audio frequencies). This is why some recommend grounding one side of a cable's shield for slow signals and both sides for high-speed signals. The thinking is that since magnetic shielding is only effective at higher frequencies, there is no point trying to use it for lower frequency signals (and risking the possible negative consequences of ground loop noise). Shielding works both to keep radiation from the signal lines in and to keep external radiation out, however, so magnetic shielding may still be beneficial for low-frequency signals in certain applications.

One more note on electric and magnetic field radiation: such radiation is typically only a “near field” concern (within approximately one wavelength of the emission source). As either electric or magnetic radiation moves further away from the source, it turns into electromagnetic radiation, which is effectively dealt with using electric field shielding techniques. This means that routing a cable away from magnetic radiation sources can reduce the need for magnetic shielding. Magnetic field noise radiating from the cable, however, will still be an EMC concern.

Termination is Everything

When it comes to shielding, the actual connection (or termination) of the shield to ground is extremely important. The connection should be a 360-degree connection (all the way around the cable) and as low impedance as possible. Sometimes the shield is rolled into a “flying lead” wire at the end of a cable and inserted into a terminal block type connection or tied down with a screw. This is undesirable for multiple reasons. For one, the inductance of the flying lead increases its impedance at high frequencies, killing its shielding effectiveness where it matters most. Secondly, the current flowing through the cable shield tends to bunch up toward the side of the shield the lead comes off of near the end of the cable, effectively making the exposed or unshielded portion of the cable look bigger from a shielding perspective. Finally, the flying lead connection at the end of the cable leaves part of the signal wire or wires completely exposed without shielding. The best connection is a 360-degree connection directly to the outside of the enclosure. This ensures the entire signal wire or wires are protected and that there is no room for radiation to be carried into or out of the enclosure.

It should also be noted that since a low impedance connection is important, any scheme involving a resistor is a lost cause. Some may, for example, recommend terminating both sides of a cable's shield, but with one side through a resistor (such as 100 Ohms) to help limit ground loops. In order for magnetic protection to work, we need to allow current flow through the shield; a resistor would limit that current flow significantly, and in the process limit the shielding effect significantly.

You might be thinking, why not use a capacitor at one end to allow high frequency magnetic shielding while blocking DC ground loop currents? This is a good idea with a difficult implementation. Using a capacitor typically means using a flying lead single point connection between the shield and the capacitor, the inductance of which kills the low impedance, high frequency response we were striving for. There are, however, some special (read: expensive) cables out there designed with 360-degree capacitive shield termination at one end.

If you have the money, there are also some more exotic shielding solutions, such as double-shielded cables (featuring a shield inside a shield). In addition to shielding, there are alternative noise fighting techniques, such as using a balanced signal over a twisted pair of wires, perhaps a subject for a future entry. For now, I hope to have provided at least some insight into the basic physics behind cable shielding.

Genius Engineers, or Don't be Afraid to Ask

One thing I've noticed at engineering seminars, vendor classes and the like is the total silence that usually follows the speaker's “Does anyone have any questions...?”

Being someone who usually does, this can make me feel a bit intimidated. Am I in a room full of geniuses? If so, why are these other guys (and gals) even here?

Engineers pride themselves on their knowledge. We get paid to think for a living, and asking a question in public demonstrates to the world that we are ignorant of something. But, as the old saying goes, “You'll never know unless you ask” (unless someone else luckily asks for you). Being afraid to ask questions in such situations is similar to being afraid to fail. You may manage to avoid both, but you most likely won't get very far in life.

The Heartbeat LED

One of the oldest tricks in the book when it comes to embedded microcontroller-based designs is to incorporate a “heartbeat” LED. A heartbeat, indicator, general purpose (or the name of your choice) LED is simply an LED powered by one of the microcontroller's extra pins, either directly through a current-limiting resistor or with the help of a transistor if the microcontroller's native voltage/current capability is not enough to drive the LED directly. The heartbeat LED can come in extremely handy as a cheap and simple indicator and diagnostic tool. For extremely cost-sensitive designs it can always be removed for production, but I wouldn't recommend laying out a PCB without (at least) one.

A common use of the heartbeat LED is simply to inform the user / operator (or whoever needs to know) that the software is running. This is typically accomplished by blinking the LED slowly (kind of like a heartbeat), usually once or twice a second. The idea is that it shouldn't blink so fast that it looks like a blur, nor so slowly that the user has to wait around to determine whether the software is running. The simple reason blinking (as opposed to steady-on) operation is used is to help identify lock-ups. If the software happens to lock up with the LED lit steady, for example, and that also happens to be your “running” indicator, you may falsely think the software is still running. By blinking the LED, you can be sure that steady on (or off, with the power LED on - it's a good idea to have one of those too!) means your software has locked up or never started running in the first place.

This brings up an important point related to the actual software implementation. It may be tempting to use one of the microcontroller's timer interrupts to periodically toggle the state of the heartbeat LED pin. This may not be good practice, however, as it could very well keep your LED toggling while your software is locked up in a logic loop, giving you a false sense of normal operation. The code that handles the heartbeat should in some way be tied into the rest of the running software. A simple program with a continuous loop can accomplish this easily by processing the heartbeat state once per loop. If code anywhere in the loop locks up, the heartbeat code will not be processed and the LED will stop blinking.
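
As a rough sketch of what I mean, here's the main-loop approach in C. The led_toggle() and millis() calls are hypothetical stand-ins for whatever GPIO and tick functions your particular microcontroller or vendor library provides.

    #include <stdint.h>

    /* Hypothetical hardware hooks; substitute your part's GPIO and timer calls. */
    extern void     led_toggle(void);
    extern uint32_t millis(void);      /* free-running millisecond tick */

    #define HEARTBEAT_PERIOD_MS  500u  /* toggle every 500 ms -> one blink per second */

    /* Call once per pass through the main loop. If anything in the loop hangs,
     * this stops being called and the LED freezes. */
    static void heartbeat_service(void)
    {
        static uint32_t last_toggle;

        if ((millis() - last_toggle) >= HEARTBEAT_PERIOD_MS) {
            last_toggle = millis();
            led_toggle();
        }
    }

    int main(void)
    {
        for (;;) {
            /* ... the rest of the application's work ... */
            heartbeat_service();
        }
    }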

Things can quickly get more complicated in code that doesn't operate in a loop fashion. In this case, the heartbeat processing can be handled much like a traditional watchdog (another good embedded practice). A flag can be used to monitor software operation. Here, using a timer interrupt may actually come in handy: a periodic interrupt can check the heartbeat flag (or flags), as well as any other diagnostic information, and update the heartbeat LED state accordingly.
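
Here's a hedged sketch of that flag-based arrangement: the application “kicks” a flag as it completes work, much like petting a watchdog, and a periodic timer interrupt only toggles the LED if the flag was freshly set. The function and flag names are mine, not from any particular vendor library.

    #include <stdbool.h>

    extern void led_toggle(void);

    static volatile bool app_alive;    /* set by the application, cleared by the ISR */

    /* Call from the normal flow of the application, e.g. once per completed
     * work cycle, much like kicking a watchdog. */
    void heartbeat_kick(void)
    {
        app_alive = true;
    }

    /* Hypothetical periodic timer ISR, assumed to fire every 500 ms. */
    void heartbeat_timer_isr(void)
    {
        if (app_alive) {               /* only blink if the application checked in */
            app_alive = false;
            led_toggle();
        }
        /* If app_alive stayed false, the LED freezes: the code has stopped running. */
    }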

This opens up the possibility of displaying alternate error conditions. Instead of just blinking or not blinking, the LED could blink at different rates to indicate various states: a faster rate may indicate a certain error condition has occurred, while a slower rate may indicate that a low power or sleep mode has been activated.
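
One simple way to sketch that is a small lookup table of blink periods indexed by a status code, checked from the same periodic tick. The states and periods below are purely illustrative.

    #include <stdint.h>

    extern void led_toggle(void);

    /* Illustrative status codes and blink periods, measured in timer ticks. */
    typedef enum { STATUS_RUN, STATUS_SLEEP, STATUS_FAULT } status_t;

    static const uint8_t blink_ticks[] = {
        [STATUS_RUN]   = 10,   /* normal: relaxed blink                 */
        [STATUS_SLEEP] = 40,   /* low power: very slow blink            */
        [STATUS_FAULT] = 2,    /* error: fast, attention-getting blink  */
    };

    /* Call from the periodic timer tick; 'status' comes from the application. */
    void heartbeat_tick(status_t status)
    {
        static uint8_t count;

        if (++count >= blink_ticks[status]) {
            count = 0;
            led_toggle();
        }
    }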

In addition to varying the blinking rate, the LED brightness can be varied with PWM (pulse-width modulation) of the LED power. If a hardware PWM peripheral is not available on the microcontroller, simple PWM functionality can be implemented in code. Combining varying blink rates with different LED intensities can lead to an incredibly vast array of message possibilities. To save power, for example, low power modes may choose to use a lower brightness with a short-duration blink that occurs at a slower interval. A high-level error may blink at a fast rate and high intensity to grab the user's attention. Blinking by alternating the LED from full brightness to, say, 10% brightness provides yet another possible display indicator.
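
If no hardware PWM channel is free, a crude software PWM driven from a fast periodic tick is usually good enough for an indicator LED. A sketch, assuming a hypothetical tick around 10 kHz and simple led_on()/led_off() hooks:

    #include <stdint.h>

    extern void led_on(void);
    extern void led_off(void);

    volatile uint8_t duty = 25;   /* 0..100, percent brightness */

    /* Hypothetical fast timer tick (e.g. ~10 kHz). Every 100 ticks form one PWM
     * period; the LED is on for 'duty' ticks of it. At that rate the eye just
     * sees a dimmer LED rather than flicker. */
    void pwm_tick(void)
    {
        static uint8_t phase;

        if (++phase >= 100)
            phase = 0;

        if (phase < duty)
            led_on();
        else
            led_off();
    }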

A continuous and gradual variation in the brightness level can also help communicate the progress of specific events. Much like a progress bar displays progress graphically, an LED can achieve a similar effect by gradually getting brighter (or dimmer) during a time-consuming process. For processes that take a considerable amount of time, it may be worthwhile to repeat this gradual ramp at a constant rate until the process has completed, so that a very slow transition doesn't end up looking like a steady light (which wouldn't be very communicative).
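
Building on the software PWM sketch above, a hypothetical ramp routine called at a steady rate can produce that repeating “filling up” effect:

    #include <stdint.h>

    extern volatile uint8_t duty;   /* the 0..100 brightness from the PWM sketch */

    /* Call at a steady rate (e.g. every 50 ms) while a long process runs. */
    void progress_ramp_step(void)
    {
        duty += 2;                  /* brighten a little each step              */
        if (duty > 100)
            duty = 5;               /* wrap back to dim and start ramping again */
    }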

The LED can also come in handy as a crude debugging tool during software development. It can act as a “do we ever get here?” indicator that communicates whether or not you reach a certain point in the firmware. The LED can also be used to indicate various events to give insight into the firmware, such as when a button press has been detected, when a communication signal is being received or transmitted, when an interrupt is active, etc.

One more common use of the LED during initial software development is as a time indicator. If you need to know how long a particular section of code takes to run, simply turn the LED on at the start of the code and off after processing is complete. If the code is too fast to see, loop through it 100 times, or 1,000, or whatever is needed, and divide the total measured time accordingly. Sometimes a precise time may not be necessary: if you have a continuously running loop, you may just be interested in what percentage of the loop time a certain portion of the code uses. With fast loops (too fast to see the LED turning on and off), the brightness of the LED can give you a rough estimate of the processing time. If the LED looks nearly full on, that code is taking up a large percentage of the loop time; nearly off means it probably isn't taking up much time at all. Sometimes a rough idea is all that is required.
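
In code, the trick is just bracketing the section of interest with the LED, repeated enough times that the on-period is long enough to see or to catch on a scope. The helper names here are hypothetical:

    extern void led_on(void);
    extern void led_off(void);
    extern void code_under_test(void);    /* the routine being timed */

    void measure_with_led(void)
    {
        led_on();                          /* stopwatch or scope starts here */
        for (int i = 0; i < 1000; i++)     /* repeat so the on-time is measurable */
            code_under_test();
        led_off();                         /* ...and stops here; divide by 1000 */
    }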

The heartbeat LED can be an engineer's best friend, providing a vast array of potential information and the reassuring knowledge of always being there, even when superior tools such as debuggers, oscilloscopes and multimeters are not. It may be easy to overlook at design time, but don't forget it: the first time you do will be the time you need it most! Also, don't limit your use of the heartbeat LED to some boring on/off scheme; it offers a tremendous amount of information-conveying potential.