Saturday, November 30, 2013

Homework #7 - 9/5 (Makeup)

Response to readings:

     I was already familiar with the first reading assigned for today, The Magical Number Seven. Dr. Manaris spoke of it often in the classes I took with him. I remember him saying, "If you have more than 7, plus or minus 2, lines of code in your method, you are probably doing it wrong! Split it up into multiple methods!" Since learning of it, I find myself thinking of it every now and then in everyday life. For example, if I'm going grocery shopping, I won't even try to keep the list of things to get in my head if it's longer than that magical 7 ± 2 number. Sometimes I wonder if computer science students at other schools learn this principle. I don't spend much time reading other people's code, but if I did, I wonder how much of it I would find that could be broken up as Dr. Manaris suggested.
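Just to illustrate the kind of splitting Dr. Manaris meant, here's a toy sketch of my own (the checkout scenario and the 7% tax rate are made up for the example): each step of a routine becomes its own small method, so no single method grows past a handful of lines.

```python
# Hypothetical example of the "7 plus or minus 2 lines per method" advice:
# each step of a checkout routine lives in its own short function.

def subtotal(prices):
    """Sum the item prices."""
    return sum(prices)

def apply_tax(amount, rate=0.07):
    """Add sales tax (the 7% rate is made up for illustration)."""
    return amount * (1 + rate)

def format_receipt(total):
    """Render the final total as a dollar string."""
    return f"Total: ${total:.2f}"

def checkout(prices):
    """Compose the small steps instead of one long block of code."""
    return format_receipt(apply_tax(subtotal(prices)))

print(checkout([10.00]))  # prints "Total: $10.70"
```

Each helper fits comfortably in short-term memory, which is the whole point of the paper's magical number.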
     The second reading for today was a study of the wireless tire pressure monitoring systems (TPMS) used in passenger vehicles today. These little devices are nifty in that they alert you of low tire pressure without your having to check it yourself. They're a neat little invention - the scary part, which this paper reveals, is that so far almost no security measures have been taken to (1) prevent eavesdropping on the signals or (2) differentiate between fake and legitimate packets. The engine's computer (ECU) fails in several respects: there are several ways it could tell that a packet was forged, but it doesn't check. For one, you can send the ECU a packet stating that the tire pressure is low while, at the same time, giving a pressure value that is perfectly acceptable. It's a little surprising that software developed to alert for low tire pressure can't even tell low from normal pressure! Another flaw is that the ECU doesn't authenticate messages. In other words, one could continuously bombard it with mixed signals until one finally gets through and turns the warning light on. At first, the system's lack of security didn't seem like that big of a deal, but imagine you are travelling long distance at night on a small two-lane highway. If thieves following in a car behind you have a signal-replicating device, they could send a packet to your car that illuminates the low tire pressure light. Personally, if my tires were a little low on pressure, I'd probably just wait until I got to the next rest stop or gas station, but there are probably some people out there who would pull off the side of the road and bust out their little 12V air pump - game over! I can't imagine this issue will go unchecked as the technology further develops. Eventually there will be some industry standard for TPMS security measures.
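The first flaw is easy to picture in code. Here's a sketch of the kind of sanity check the paper suggests the ECU lacks; the field names and the 25 psi threshold are my own assumptions for illustration, not from any actual TPMS protocol.

```python
# Sketch of a consistency check the ECU could (but apparently doesn't) do:
# reject packets whose low-pressure alert flag contradicts the pressure
# reading they carry. Field names and threshold are assumed, not real.

LOW_PRESSURE_PSI = 25  # assumed "low" threshold for illustration

def packet_is_consistent(packet):
    """Return True only if the alert flag agrees with the reading."""
    claims_low = packet["low_pressure_alert"]
    actually_low = packet["pressure_psi"] < LOW_PRESSURE_PSI
    return claims_low == actually_low

# A spoofed packet claiming "low pressure" but carrying a healthy reading:
spoofed = {"low_pressure_alert": True, "pressure_psi": 32}
print(packet_is_consistent(spoofed))  # prints False - should be rejected
```

A check this simple wouldn't stop a careful attacker (who would just send a consistent fake reading), but it would at least catch the contradictory packets the paper describes.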
     The final reading for the night was about planning for failure in the coming (or perhaps already here!) age of cloud computing. I really enjoyed this article; it read as if it were advice from an older brother or something. The author gives great advice on how to handle failure in general. When your service fails, what do you do? Does your browser display an HTTP error, or does the application crash and freeze the browser? Do you have a backup page to display in the event of a failure? Some of this advice seemed like common sense, but some of it, not so much. For example, I don't know if I would have thought to use request buffering to reduce bottlenecks in the system, or to include little "retry" segments in the parts of my code that rely on retrieving data from a foreign source. If the source has a hiccup and doesn't serve the data on the first request, the retry would let the code get the data on a second attempt instead of passing an error to the client application. I think I'll save this article to my bookmarks as all-around good programming advice and practices to employ when I'm employed.
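That retry idea is simple enough to sketch. This is a minimal version in Python (the `fetch` callable and the retry count are placeholders I chose, not anything prescribed by the article):

```python
# Minimal retry sketch: try a flaky data source a few times before
# giving up and passing the error on to the client.
import time

def fetch_with_retry(fetch, retries=3, delay=0.1):
    """Call fetch(); on failure, pause briefly and try again."""
    last_error = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as err:
            last_error = err
            time.sleep(delay)  # brief pause before the next attempt
    raise last_error  # all retries exhausted

# Simulate a source that hiccups on the first request, then succeeds:
calls = {"n": 0}
def flaky_source():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("hiccup")
    return "data"

print(fetch_with_retry(flaky_source))  # prints "data"
```

Real systems would add exponential backoff and only retry errors that are actually transient, but even this bare version turns a one-off hiccup into a non-event for the client.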
