Wednesday, February 16, 2011

It's All About the Interface

I've previously mentioned the Ether-IR project I'm working on. Part of the motivation for this project is the horrible operation of our current, somewhat expensive RF (radio frequency) A/V (audio/video) remote. The remote sends an RF signal to a module inside our "entertainment closet" which then broadcasts an infrared signal to the A/V devices in the same closet. This allows us to hang the LCD TV on a wall and hide the A/V equipment away from sight, but the remote's operation is marginal at best. Sometimes, such as when our Maytag dishwasher is running, the remote doesn't work at all as it gets drowned out in RF noise emitted by the dishwasher (a quick check found that most home appliances are exempt from FCC regulations). Enter the Ether-IR, which will work over the home network, including our reliable Wi-Fi network (which is immune to dishwasher noise!). The Ether-IR will also allow us to get rid of yet another device (the remote). Our laptop, Wi-Fi enabled phone, etc., can take over the remote's functionality.

So, when I had the major functionality of my Ether-IR set up and running, I proudly showed my wife and informed her that her channel changing woes were over. I was then expecting her to jump into my arms lovingly, wooed by her 'hero engineer'. Instead, I got a "eh, I don't really like how the interface looks." I made sure to inform her the current interface was just a proof of concept, and that I would pretty it up, but the reaction seemed pretty typical based on previous projects I've worked on (she did then say the functionality was great – but, she has to). Great functionality can be meaningless without a decent interface to go along with it, and it seems that people's expectations of human-device interfaces are constantly increasing.

Eh Interface

A recent project at work involving a color display has had me doing some research on user interface design. One of my favorite authors on the subject is Niall Murphy, who routinely writes about how to design intuitive user interfaces and gives some great examples of what not to do by reviewing poor user interface designs. An interesting message from Niall is that, if possible, engineers should not be put in charge of designing user interfaces. Apparently, we think differently than "normal" people and tend to design interfaces BEFE (by engineers, for engineers) that may not be anywhere near intuitive for the average user. We may also tend to play down the importance of a good user interface and focus more on the product's functionality. As with anything, however, I think a little education can go a long way in turning engineers into good user interface designers (aren't user interface designers just another type of engineer anyways?).

As time goes on, people seem to be becoming ever more conditioned to expect great interfaces like those put out by Apple and other consumer electronics companies (probably with entire user interface departments at their disposal). Expectations are going beyond merely intuitive, properly functioning interfaces to include good-looking, attractive ones. Looks don't seem to be much of a concern to Niall (just look at his website), but they deserve some attention. We are all visual beings, and an attractive interface can go a long way in improving a user's experience. Even if it can't do anything to cover up a poorly designed, unintuitive interface or a product with sub-par functionality, it could easily be the deciding factor between two otherwise equal products.

So in addition to the other hats an engineer must wear, it seems like at least some study in user interface design, and maybe even in artistic design, could be very valuable. A redesign of the Ether-IR interface (with my limited web-design knowledge), as seen below, improved the interface greatly (and I am now my wife's engineering hero). It's still a work in progress, however, and if anyone has any suggestions on improving the interface, I'd love to hear them.

Updated Interface


Thursday, February 3, 2011

OpenOffice Spell Check Fail + Inventive Spelling Fail - Rant

After installing the newest version of OpenOffice, my default language (English USA) was deselected for some reason. After typing up a brief summary for a meeting and hitting the spell check button, I was surprised to find that I had 0 errors! Wow, I thought, I'm really improving...

I think this was a Fail by OpenOffice on two fronts:
  1. The new version should have adopted the same language as the previous version and
  2. if no language is selected, spell check should report that in a message, not just proclaim no errors were found.

Luckily, I passed the paper off to a coworker for review before the meeting, because while I am improving my spelling, I was lured into a false sense of confidence by the spell checker and missed some obviously incorrect words, which brings me to the second part of my rant.

Inventive Spelling had to be one of the worst ideas ever. The theory was that it is emotionally hurtful to be wrong. So instead of "punishing" kids for spelling words wrong in elementary school, they'd just let us "invent" a spelling of our choice. This would allow us to be more creative writers who weren't shackled by our limited spelling ability. In reality, it just produced a bunch of emotionally handicapped horrible spellers conditioned to be too lazy to look in a dictionary (there was no negative consequence anyways).

Oh well, hopefully writing this blog, among other things, will allow me to reach the point where I may one day become a human spell checker!

4 States of a Push Button

The push button is commonly used in all sorts of electronic projects. One of the first lessons in interfacing a microcontroller to the outside world (after blinking an LED) is to sense a push button state. This is where the harsh realities of the analog world (as opposed to the relatively perfect digital software world) are introduced through debouncing. Jack Ganssle has a classic report on debouncing that I won't attempt to add to, other than to say you should read it if you haven't already. The binary bit-shift method described in that paper offers a classic, simple and efficient way to debounce buttons and switches.
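A minimal sketch of that bit-shift idea is below. The names and the 5 ms sampling rate are my own assumptions, not from Ganssle's paper; adjust both for your hardware.

```c
#include <stdint.h>
#include <stdbool.h>

static uint8_t history;   /* last 8 raw samples, newest in bit 0 */
static bool debounced;    /* latched debounced state             */

/* Call at a fixed rate (e.g. every 5 ms) with the raw input sample.
   The latched state only changes after 8 consecutive samples agree,
   so any bounce shorter than ~40 ms is filtered out. */
bool debounce_update(bool raw_pressed)
{
    history = (uint8_t)((history << 1) | (raw_pressed ? 1u : 0u));
    if (history == 0xFFu)          /* 8 consecutive "pressed" reads  */
        debounced = true;
    else if (history == 0x00u)     /* 8 consecutive "released" reads */
        debounced = false;
    return debounced;
}
```

A bouncing contact produces a mixed bit pattern in `history`, so neither the all-ones nor all-zeros condition is met and the latched state holds steady until the contact settles.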

One thing I'd like to add about interfacing with buttons in general, however, is that there are actually 4 states to a button (not just pressed or not pressed).

1) Pressed Edge

2) Pressed

3) Released Edge

4) Released

I'd recommend including a state machine capable of detecting all 4 states for every project with a button, even if you don't think you need all of them up front. The logic takes very little program space, adds little in terms of processing time, and allows you to quickly add new button related functionality as the project progresses. 

You may use a push button to increase volume, for example, where the volume is incremented by a set amount each time the button is pressed. In this case, all you want to detect is the Pressed Edge of the button. If you were to just check the button input state (0 or 1) instead of looking for the Pressed Edge, a single press could generate 25 volume increase commands, depending on the length of the button press and the button input sampling rate.

Later, however, you may decide that holding the button down for more than 0.5 seconds should cause the volume to gradually increase until the button is released. If you've implemented the 4 state button interface from the start, this modification is trivial. Now you can use the Pressed Edge state to increase the volume by one increment and monitor the length of the Pressed state to allow for a gradual automatic volume increase.

Implementing all 4 states actually only requires observing the Pressed Edge and the Released Edge. One update cycle after the Pressed Edge is detected, the state becomes Pressed, and it stays Pressed until one cycle after the Released Edge is detected, at which point the state becomes Released. This allows noise (random bits) to occur in either the Pressed or Released state without upsetting the state machine.
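Here's one way the 4-state machine could be sketched, fed once per tick with the already-debounced input. The enum and function names are my own invention, just to illustrate the idea:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum {
    BTN_RELEASED,
    BTN_PRESSED_EDGE,    /* lasts exactly one tick */
    BTN_PRESSED,
    BTN_RELEASED_EDGE    /* lasts exactly one tick */
} btn_state_t;

static btn_state_t btn = BTN_RELEASED;
static uint16_t held_ticks;   /* ticks spent in BTN_PRESSED */

/* Call once per tick with the debounced input; returns the state. */
btn_state_t button_update(bool pressed)
{
    switch (btn) {
    case BTN_RELEASED:
        if (pressed) btn = BTN_PRESSED_EDGE;
        break;
    case BTN_PRESSED_EDGE:     /* one tick later: solidly pressed */
        btn = BTN_PRESSED;
        held_ticks = 0;
        break;
    case BTN_PRESSED:
        held_ticks++;
        if (!pressed) btn = BTN_RELEASED_EDGE;
        break;
    case BTN_RELEASED_EDGE:    /* one tick later: solidly released */
        btn = BTN_RELEASED;
        break;
    }
    return btn;
}
```

The volume example then falls out naturally: bump the volume once on BTN_PRESSED_EDGE, and while in BTN_PRESSED, start auto-repeating once `held_ticks` passes whatever your 0.5 second threshold works out to at your tick rate.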

Software button interfacing can be surprisingly complicated, but I've found this method to be very intuitive as well as a good way to future proof your code.

Wednesday, February 2, 2011

Open Source Ether-IR project, 1st Milestone


I'm currently working on an Open Source project that is essentially a low cost Ethernet based learning remote. The idea is to put this thing in a closet or somewhere tucked away with all your Audio/Video equipment, and allow the use of any Ethernet capable device to control your TV, DVD player, cable box, etc. I'm hoping to get a slick iPhone interface made up so you can use a Wi-Fi enabled phone as a remote. I got Microchip's Ethernet stack up and running and last night I was able to record and broadcast the volume up signal from my TV remote, so now I can crank up the volume from my laptop (at least until I power off and the memory is erased – still haven't implemented EEPROM saving yet). As the saying goes, 90% there, 90% left to go.

Tuesday, February 1, 2011

Full Design Gamut Position

A recent episode of the EEVBlog mentioned a new engineering graduate's frustration with real world engineering, and asked about the reality of full design gamut jobs out there. As someone who has such an engineering job at a relatively young age (still in my 20s), I thought I'd share the pros and cons of working in a small team on projects, sometimes as the sole engineer.

Design Freedom

Perhaps the biggest pro to such a position is what engineers first think of, design freedom. There is never the problem of too many chefs, or of having every design decision dictated down to you. At the end of the day I'm still a design monkey (implementing another person's idea), but being involved in the design meetings does allow me to voice my opinion about creative ideas and potential product features. When it comes to design freedom, I'd say there is nothing like doing your own thing (such as for an Open Source project), but even when designing for someone else, having so much freedom goes a long way to fulfilling the creative bug that most of us engineers have.

Bubble World

The drawback to this design freedom is the lack of fellow engineers to bounce ideas off of, discuss design philosophies with, share experience with, etc. This may not be the case if you have a full gamut design position within a larger engineering company, but it seems more likely that such positions will be in small companies. Smaller companies also usually offer less in the way of tools, equipment and software. This probably isn't as big a deal if you are further along in your career, but in your younger years there may be some missed opportunities to soak up the knowledge other engineers have garnered over years of experience. The hands-on experience from carrying out and testing designs, however, certainly provides some great experience in its own right.

Paper Work

The paperwork associated with professional engineering doesn't disappear. Bills of materials, manuals, assembly drawings, final test procedures... all still need to get done. In fact, instead of escaping this type of work, there is probably an even larger burden of it that falls on the shoulders of the designer in a one man gig.

Knower of All, Master of None

Coming up with a design architecture, going through the component research, designing the analog and digital circuitry, writing the software, testing the hardware and software, etc., means it's tough if not impossible to get bored from tedious work. It's definitely an ideal position if you like a change of scenery every once in a while. It also means there is no escaping or passing off any portion of the design, even the parts you may not find very exciting (e.g. reading through some European Union enforced safety standard). A drawback is that it is more difficult to specialize in one aspect of design. When I'm working on software, for example, I find myself buying and reading software books, taking online software classes to further my knowledge, etc. When I'm working on the hardware portion of the design, I'll probably spend more free time studying the hardware side of things. The fact is, there are probably many more jobs out there at larger corporations looking for specialized skills. If I had to find a new job, I might be competing for a firmware position against firmware gurus who spend all (or most of) their time studying the software side of things, or for a hardware position against people who ignore software and devote their energy to becoming hardware experts.

I personally tend to think that specialization is overrated. Technologies can change so quickly that the ability to adapt to, learn and utilize new technologies is more valuable than mastering one specific skill and staying within that comfort zone, which is why this doesn't worry me much. Also, many times experience in multiple disciplines can improve your skills across those disciplines. Still, in addition to mastering the fundamentals, I think it is probably a good idea to specialize at least to a small degree in one general area. Most people have a preference for a particular stage of the design flow that they find more enjoyable or interesting, so this usually occurs naturally anyway. But depending on your point of view in the specialization versus generalization debate, this is something to consider before applying for a one man gig type job.

Responsibility

Releasing a product to the world can be very rewarding. It can also be somewhat stressful. If any issues sneaked through testing and start popping up in the field, there is no 'shared responsibility'. If you are the type of person who stresses easily, or who does not like having a lot of responsibility, such a position is probably not for you. I find the best way to deal with this is through extremely thorough testing. I try to switch my brain into a completely different mode in the test phase, acting as if I have no idea who the designer is and I don't trust him. He must have made some mistakes, and they need to be found. Slower releases, with trial runs at the location of a customer with whom you have a good relationship, can be helpful here. It is also necessary to be assertive if you're being pushed to release the product before it is completely tested. It is better (for your reputation and the company's) to release a fully tested and verified product late than to release a faulty product early.

Saturday, January 29, 2011

Residential Geothermal HVAC

A little over a year ago I decided to investigate the possibility of purchasing a new heating system. My oil burning furnace was over 30 years old, and the thought of it crapping out in the middle of a cold New Hampshire winter was not pleasant. I started researching my alternatives and quickly grew interested in a geothermal HVAC system. Geothermal seems to be something many are interested in but few know about, and the number of questions I get from those who've learned I've done some research into it prompted me to write this short summary of my personal findings.

A Quick Overview

The basic idea of a geothermal HVAC system is to take advantage of the earth's relatively stable temperature. At a surprisingly shallow depth (~6 ft), the earth maintains a moderately consistent temperature year round. The deeper you go, the more stable the temperature is. The actual temperature varies by location, but the ground up my way stays around 49 degrees F. This is much warmer than the average air temperature in the winter and cooler than the average air temperature in the summer, providing a means to more efficient heating and air conditioning.

A typical geothermal system consists of an electric heat pump along with lots of tubing run through the ground to help transfer heat from the earth in the winter (or to the earth in the summer) via a liquid solution which is pumped through the tubing. The larger the home, the more tubing (and land area) required. The way this tubing is run varies depending on where you live. In the good old Granite State, the ground is hard and difficult to push around easily. This is why most systems here are implemented as “vertical” systems. A deep hole or multiple deep holes are drilled straight down into the earth and tubing is spiraled down into and up out of the hole. One advantage to such a system is that by traveling hundreds of feet into the earth, the tubing is able to reach some extremely stable temperatures. A second advantage is that this does not require much property. Vertical systems can typically be installed on small lots.

The other major system type is known as “horizontal.” Here, a bulldozer is used to move some earth and bury some spiraling tube 6-10 feet below the surface. These types of systems are more common in the Mid-West or in areas with softer ground. They tend to be cheaper to install, but require a fairly large plot of land.

A third, less common system type lays the tubing in a pond. This requires that you have a large pond on your property, which isn't very likely. If you're one of the lucky few, however, this is one of the cheapest ways to implement a geothermal system. Even in a cold winter, the bottom of a large pond won't freeze. It will, however, be colder than the deep earth, so the low cost of installation is countered by lower energy efficiency.

A water/antifreeze liquid solution is pumped around the tubing to an electric powered heat pump, which heats or cools your home. The more heat you can extract from the earth in the winter, or dump into the earth in the summer, the more efficiently the heat pump can run, and the less electricity consumed. These systems are almost always forced hot air systems. They typically can't generate a high enough temperature to work with forced hot water systems, although radiant floor heating is a possibility.

Reliability and Efficiency Factors

The vertical system I was considering comes in two varieties: open loop and closed loop. Closed loop is exactly as it sounds. A small electric pump circulates the liquid solution around a loop of tubing that goes down into the earth and comes back up. The open loop system works like a typical water well (and may even double as the home's well), pumping water up from deep in the ground. The return water is simply dumped back in towards the top of the well, where gravity can do its thing.

There is an efficiency curve to these two alternate systems that depends on the home size. Smaller homes are more efficiently heated with a closed loop system because the huge amount of electricity pulled by the well pump (~8 Amps) is avoided. Closed loop systems can operate with very low power pumps that don't have the arduous task of fighting gravity. Large homes are more efficiently heated with open loop systems because such systems generally provide warmer water than can be achieved by recycling liquid continuously around a loop, allowing higher efficiency at the heat pump.

One major drawback to open loop systems, however, is that the well pump is buried in the ground. These pumps can take some abuse (especially if they're running all winter) and will periodically need to be replaced. Of course, it will probably need to be replaced in the middle of the winter, and the cost of replacing a well pump is usually around a few thousand dollars (which should be included in the ultimate efficiency cost of the system). A second drawback is that over a winter season, the greater efficiency offered by an open loop system can start to dwindle as cold water starts to make its way back down to the well. Many times, such systems will include an alternate run off path. Typically controlled by a valve, the alternate runoff allows the home owner to bleed the return water off to a different location for a period of time, allowing fresh warm water to fill the well reservoir. Doing this too long, however, runs the risk of running the well dry.

Having a small home, the closed loop system looked much more attractive to me. In addition to the higher efficiency it would provide for my particular home, I liked the lower maintenance of the closed loop system, along with the security of knowing that all moving parts (that could break) would be in my basement and comparatively easy to repair. It should be noted that the reliability of the tubing itself is excellent. Your kids' kids won't have to worry about it decomposing.

Perhaps the most important aspect of a geothermal system is determining the amount of tubing and ground work required for your home. A poor installation can end up being an extremely expensive mistake. If the system is too small, the earth around the tubing will start to cool down in the winter. The heat pulled from the ground will not be able to replenish itself and the system will become effectively useless. At this point, a pure electric heat system typically takes over – Cha Ching! Avoiding this makes a conservatively oversized system sound like a no brainer, but over-sizing the system can reduce running efficiency and can be significantly more expensive to install. Because of these factors, I do not recommend this as a “do it yourself project” and would only recommend going with an established geothermal company with experience and a list of previous customers you can contact. Most of these companies will use a combination of computer software and experience to determine the required groundwork for your house.

The Important Stuff - Money

First and foremost, these systems ain't cheap. I was quoted just over $20,000 for a system for my relatively small Cape style home. At the time, the government was offering a 30% tax credit towards the cost, but I'm not sure if such a credit is still available.

Most people, myself included, like to think of such huge purchases as investments. So I created a new Excel sheet (actually an OpenOffice Calc sheet) and started attempting to calculate the break even time for my potential geothermal system. Clearly, the upfront cost is significantly higher than other systems, but the energy savings year over year will eventually break even with an alternate oil or natural gas system. After that, the system will start saving money. Easy, or so I thought. The fact is, there are some huge variables here that are almost impossible to calculate or predict.

The first is the actual energy savings. I've found geothermal companies completely unwilling to give me even a gross estimate of the predicted energy costs. The “don't worry, I won't sue you if you're wrong” line doesn't work here. The best I could get was a list of previous customers with claimed energy savings between 25-75%. That's a huge margin, and much of it probably depends on the year. Comparing the savings to a previous year with $5 oil, for example, will show much greater savings than a year with $2.50 oil. As an electronics engineer, I know to use the worst-case value on a component's spec sheet, but using 25% savings greatly increases the length of time required to justify such an investment in comparison to, say, 50% or 75%.
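The back-of-the-envelope math here is simple enough to sketch. The $20,000 install and 30% tax credit are the real numbers from my quote, but the $3,000/year oil bill in the comments is purely a placeholder I made up, and the function name is my own:

```c
/* Years to break even: net install cost divided by annual savings.
   savings_frac is the big unknown (somewhere between 0.25 and 0.75). */
double breakeven_years(double install_cost, double tax_credit_frac,
                       double annual_oil_cost, double savings_frac)
{
    double net_cost = install_cost * (1.0 - tax_credit_frac);
    return net_cost / (annual_oil_cost * savings_frac);
}

/* With an assumed $3,000/year oil bill:
     25% savings -> 14000 / 750  = ~18.7 years
     75% savings -> 14000 / 2250 = ~6.2 years  */
```

Splitting the difference at 50% savings lands around nine years, which is roughly where my own estimate ended up. The spread between the worst and best cases is the whole problem: the same system is either a clear win or a very long bet.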

Another factor to consider here is the future price of oil and gas. Some research on the Internet told me that either a) we have tons of oil, and if the government just allows us to get it, energy will nearly always be cheap, or b) we have no oil left, anyone who says otherwise is lying, and by 2030 oil will be a bazillion dollars a barrel. I frankly have no idea what the cost of oil or gas will be in the future, but my guess is it will continue to increase, either due to supply issues, government mandates and programs, or a combination of the two. The rate of this increase can have a large impact on the return on investment of a geothermal system. I also tend to think that, going into the future, the cost of electricity will eventually start to fall with new alternative energy developments, which also improves the long term prospects of a geothermal system, though again, this is just a prediction.

One more factor to consider is how much such a system adds to a home's value. Again, there is not much information to go by here. There are so few homes with geothermal systems on the market that research is difficult. It also seems greatly dependent on the year: when oil and gas are very expensive, buyers may place much more value on a geothermal system than when they are cheap. There is also somewhat of an emotional role here, depending on the buyer. The “green” conscious among us may see such a system as very appealing for non-monetary reasons, and may be willing to pay more because of it.

In the end, an accurate calculation is nearly impossible. I estimated a reasonable expectation of a 10 year break even point, but could see it easily being longer than that. As the home my family and I are currently living in is our starter home, it just didn't seem worthwhile. My wife getting pregnant around this time sealed our decision to hold off. I still think such a system can make sense for many out there, however, and hope that for my next home (which I'll most likely live in for a long time) I am able to use such a system. In certain situations, such as when designing your own home from scratch, a geothermal system can be planned in for a much lower cost, for example by including it as part of the well system. Combined with a solar array to produce electricity, such a system can provide for a nearly complete “off the grid” solution. I like to dream big, but think that realistically this type of setup, or something similar, will be very affordable and even popular within the next few decades.

Cable Shielding

Cable shielding is used to help protect the signal line or lines in a cable from outside interference, as well as to prevent the signal from radiating noise to the outside world. The shield is usually grounded at one or both ends of the cable. Some say you should always ground only one side of a cable shield; others say both sides every time, one or both sides depending on the frequency of the cable's signal, one side through a capacitor, a resistor, a capacitor and resistor, or some other scheme.

This can make choosing the best shield termination scheme for a particular application difficult. Many sources dictate the best action to take (or at least the author's opinion of it) without explaining why, which is really no help at all. While there are no be-all-end-all rules when it comes to shield termination, understanding a few common points regarding cable shielding can help you make an educated decision for your application.

One Side or Both Sides, What's the Difference?

The main difference between grounding one or both sides of a cable shield is the type of radiation protection offered. Grounding one side of a cable's shield provides electric field protection. Holding the shield at a constant potential makes all electric field noise outside the cable invisible to the internal conductor or conductors. This shielding effect is exactly what we think of when we hear the word “shield” (think of the Starship Enterprise shielding itself from a Klingon attack).

Single sided shield grounding, unfortunately, provides no protection against magnetic field radiation, which simply plows through our measly single sided shield. Magnetic radiation, in fact, is extremely difficult to shield in a 'blocking' manner without exotic shielding techniques involving very thick shields made of magnetic material.

The best way to deal with magnetic radiation is to use its own force against it (just as a smart ninja would do to a larger attacking enemy). This is done by grounding both sides of the shield. Doing so allows magnetic field radiation to push current through the shield, which in turn creates its own radiation focused on the internal conductor or conductors (but in the opposite direction of the original radiation), canceling out the impact of the noise on the signal. With only a single side grounded, however, there can be no current flow in the shield, and therefore no “shielding” effect against magnetic field radiation.

It is important to note that grounding both ends of a cable has potentially negative consequences. If there is a difference in ground potential at either side of the cable, ground loops can form. The current flowing through the shield due to the ground differential creates radiation which is injected into the cable's signal conductor or conductors, actually adding noise to the signal. So, while grounding a cable at both ends can improve things, it can also make things worse.

It is also important to note that this form of magnetic shielding only works at high frequencies (typically above audio frequencies). This is why some recommend grounding one side of a cable for slow signals and both sides for high speed signals. The thinking is that since magnetic shielding is only effective at higher frequencies, there is no point trying to use it for lower frequency signals (and risking the possible negative consequences of ground loop noise). Shielding works both to keep radiation from the signal lines in and to keep external radiation out, however, so magnetic shielding may still be beneficial for low frequency signals in certain applications.

One more note on electric field and magnetic field radiation: such radiation is typically only a “near field” concern (within approximately one wavelength of the emission source). As either electric or magnetic radiation moves further away from the source, it turns into electromagnetic radiation, which is effectively dealt with using electric field shielding techniques. This means that routing a cable away from magnetic radiation sources can reduce the need for magnetic shielding. Magnetic field noise radiating from the cable, however, will still be an EMC concern.

Termination is Everything

When it comes to shielding, the actual connection (or termination) of the shield to ground is extremely important. The connection should be a 360 degree connection (all the way around the cable) and as low impedance as possible. Sometimes the shield is rolled into a “flying lead” wire at the end of a cable and inserted into a terminal block type connection or tied down with a screw. This is undesirable for multiple reasons. For one, the inductance of the flying lead increases its impedance at high frequencies, killing its shielding effectiveness where it matters most. Secondly, the current flowing through the cable shield tends to bunch up towards the side of the shield the lead comes off of, effectively making the exposed or unshielded portion of the cable look bigger from a shielding perspective. Finally, the flying lead connection at the end of the cable leaves part of the signal wire or wires completely exposed without shielding. The best connection is a 360 degree connection directly to the outside of the enclosure. This ensures the entire signal wire or wires are protected and that there is no room for radiation to be carried into or out of the enclosure.

It should also be noted that, since a low impedance connection is important, any scheme involving a resistor is a lost cause. Some may, for example, recommend terminating both sides of a cable, but with one side through a resistor (such as 100 Ohms) to help limit ground loops. For magnetic protection to work, we need to allow current flow through the shield. A resistor would limit that current flow significantly, and in the process limit the shielding effect significantly.

You might be thinking: why not use a capacitor at one end to allow high frequency magnetic shielding while blocking DC ground loop currents? This is a good idea with a difficult implementation. Using a capacitor typically means using a flying lead single point connection between the shield and capacitor, the inductance of which kills the low impedance, high frequency response we were striving for. There are, however, some special (read: expensive) cables out there designed with 360 degree capacitive shield termination at one end.

If you have the money, there are also some more exotic shielding solutions as well, such as double shielded cables (featuring a shield inside a shield). In addition to shielding, there are alternate noise fighting techniques, such as using a balanced signal over a twisted pair of wires, perhaps a subject for a future entry. For now, I hope to have provided at least some insight into the basic physics behind cable shielding.