Saturday, November 30, 2013

Homework #9 - 9/12 (Makeup)

Response to Mythical Man Month (Ch 1-4):

     Tonight's reading was quite interesting, to say the least. It touched on many different aspects of the development life cycle of a programming project. For one, when a project is falling behind schedule, a natural reaction is to add more manpower in order to get it back on track. However, adding people to the project in fact has the opposite effect. Time for training the newcomers in the language, the environment, and the current system design must be accounted for. Then there's the time it takes to devise a new delegation of the tasks involved in the project, so that the new people have something to do. And finally, you must include the time it takes to rewrite the schedule with the new manpower included in the projection. So overall, the trade-off for adding more people to a project to get the job done faster is negative.
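     Brooks actually quantifies part of this in chapter 2: if each part of the task must be coordinated with each other part, the intercommunication effort grows as n(n-1)/2. Here's a little Python sketch of my own (not from the book) just to make that growth concrete:

```python
# Pairwise communication channels among n people, per Brooks's n(n-1)/2.
# Adding people grows coordination overhead much faster than headcount.

def communication_channels(n: int) -> int:
    """Number of distinct pairs that may need to coordinate."""
    return n * (n - 1) // 2

for team_size in (3, 5, 10, 20):
    print(f"{team_size:2d} people -> {communication_channels(team_size):3d} channels")

# Output:
#  3 people ->   3 channels
#  5 people ->  10 channels
# 10 people ->  45 channels
# 20 people -> 190 channels
```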
     I like the idea that the author credited Harlan Mills for: making programming teams that resemble surgical teams. Several small groups of people working on the same project, or component of a project, with each member having a very specific role within the group that supports the effort of the group as a whole. Only one or two main programmers and designers, and several other supporting roles are needed. This idea of a surgical team reminded me of a pair-programming video that Dr. Starr showed our CSCI 230 class several semesters ago. It stated that using pair-programming increases productivity, quality of code, and the learning and enjoyment of the programmers involved, and although I cannot say I've had much experience with pair-programming, I am inclined to agree heavily.
     Another part of this article talked about how most programming projects are divided among developers, with each developer also in charge of the design of his or her specific component. The author says this is dangerous because the overall design of the system is no longer unified, and it can end up with many parts that, while well designed in themselves, are not designed to work well with the other components. Therefore, again, the solution is to separate the design of the project from the implementation of the project. Hence where the term "code monkey" comes from: a small team of the "creative" and "elite" programmers gets to have all the fun designing the system, then passes off the frustrating part of implementation to the "lesser" programmers. This idea in itself seems backwards to me as well. Why would you want the elite programmers to just design it, if they aren't going to use the skills that classify them as "elite" to actually code it? Even though it may be well designed, the code would not be as efficient and reliable as it would have been had the elite programmers been the ones coding it in the first place. Perhaps I'm thinking about it in the wrong way. The author did say that this team architecture should only be used for large-scale programming projects.

I enjoyed this read a lot and have definitely taken away from it, at the very least, that adding more people to a project that is running behind schedule is the wrong move. One cannot conquer the Mythical Man-Month because it is just that - mythical and imaginary.

Homework #8 - 9/10 (Makeup)

Response to The Future of Programming:

     I personally don't see this getting very far. They advertise programming from the browser, and therefore being able to program on any device that can run a browser, or more specifically, Chrome. Sure, programming on Chromebooks might be useful, but there's no real difference between doing that and programming on any other conventional laptop. The other possibility they say it opens up is that you'd be able to program on your iPad! Are you kidding? I can't think of anything worse than trying to write a program on any sort of touchscreen device - unless we're talking about dragging and dropping snippets of code to create a larger program, which is what I think is the real future of programming. Before I talk about that, I want to finish with Cloud9, though. One of their other main selling points is that you can "zoom" in and out of your code to have a better idea of where you are within it. This is pretty cool, but it isn't novel by any means. It is probably something that will be standard in any future IDEs that come out. Ian showed me the Sublime text editor, which I have fallen in love with this semester, and it more or less does the same thing. Another selling point of Cloud9 is that you can access your code anywhere online, but again, this isn't a very novel technology, nor does it really meet any new needs. Over the semester, we've learned how to use version control programs such as Subversion and Git. These programs, in my opinion, provide the same services that Cloud9 does and more. Not only can you save your code online and re-download it from any other machine (although perhaps not as easily), but you can also revert to older versions of the code if you accidentally broke something and then committed it to the repository. In the end, Cloud9 to me is mainly just a kind of over-the-top showcase of what an IDE should be. It looks pretty, but it doesn't provide any new functionality that previous IDEs haven't offered before.
     Now, where I think the real future of programming is headed is graphical programming interfaces - much like how developing websites works nowadays compared to when the World Wide Web still had its baby teeth. Before, you had to know HTML, and even then, you had to spend a lot of time and effort to make a website that looked more than pathetic. Nowadays, you click on a theme, and boom, it formats your entire page for you. Adding pictures and tables? No problem, just click here. Don't like it over there? No problem, just drag it over here. The most comparable thing in use today that I can envision the future of programming to be like is developing apps for Android. The Android SDK implements drag-and-drop programming with certain elements, such as radio buttons, check-boxes, and lists. All you have to do is click which type you want to insert and it's done, no writing Java code necessary. The other thing that leads me to believe this is the future of programming is how much more abstract our languages have gotten. If you think about the progression throughout the decades, it makes a lot of sense. Machine language deals with literal machine hardware instructions; with C, you can directly access memory at a specified location; then you move to Python or other similar languages, where the type of a variable is determined dynamically rather than being fixed at compile time. It won't be long before languages are abstracted further, to the point of accepting "ideas" of how the program should work as the input, with the environment or language translating them further for you as necessary. It's only a small step from this to graphical programming, in my opinion.
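     Just to make that last jump concrete, here's a tiny Python example of my own showing the dynamic typing I mean - the same name can refer to values of different types at runtime, something a C compiler would reject:

```python
# Dynamic typing in Python: a variable name is not tied to one type.
x = 42            # x refers to an int
print(type(x))    # <class 'int'>

x = "forty-two"   # the same name now refers to a str
print(type(x))    # <class 'str'>

x = [4, 2]        # ...and now to a list
print(type(x))    # <class 'list'>
```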

Homework #7 - 9/5 (Makeup)

Response to readings:

     I was already familiar with the first reading assigned for today, The Magical Number Seven. Dr. Manaris spoke of it often in the classes I took with him. I remember him saying, "If you have more than 7, plus or minus 2, lines of code in your method, you are probably doing it wrong! Split it up into multiple methods!" Since learning of it, I find myself thinking of it every now and then in everyday life. For example, if I go grocery shopping, I won't even try to remember the list of things to get in my head if it's around that magical 7 ± 2 number. Sometimes I wonder if computer science students from other schools learn of this principle. I don't spend that much time reading other people's code, but I wonder, if I did, how much of it I would find that could be broken up as Dr. Manaris said.
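     Here's a quick, made-up Python sketch of what that splitting looks like in practice - one grocery-list routine broken into helpers that each do a single, nameable thing:

```python
# Instead of one long method that parses, deduplicates, and formats a
# grocery list, each step gets its own short, easily-remembered helper.

def parse_items(raw):
    """Split a comma-separated string into trimmed item names."""
    return [item.strip() for item in raw.split(",") if item.strip()]

def deduplicate(items):
    """Drop repeated items while preserving order."""
    return list(dict.fromkeys(items))

def format_list(items):
    """Render the items as a bulleted shopping list."""
    return "\n".join("- " + item for item in items)

def shopping_list(raw):
    """The top-level method now reads as three obvious steps."""
    return format_list(deduplicate(parse_items(raw)))

print(shopping_list("milk, eggs, bread, eggs, coffee"))
```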
     The second reading for today was a study done on the wireless tire pressure monitoring systems (TPMS) used in passenger vehicles today. These little devices are very nifty in that they alert you of low tire pressure without you having to constantly check it yourself. They're really a neat little invention. The scary part that this paper reveals is that, so far, little to no security measures have been taken to (1) prevent the eavesdropping of signals and (2) differentiate between fake and legitimate signal packets. The engine's computer (the ECU) failed in several respects, in that there were several ways to tell that a packet was false and it accepted them anyway. For one, you could send a packet to the ECU stating that the tire pressure is low, but at the same time give a pressure value that is acceptable. It is a little surprising that software developed to alert for low tire pressure can't even tell what low vs. normal tire pressure is! Another flaw in the ECU is that it doesn't authenticate messages. In other words, one could continuously bombard the ECU with mixed-message signals until one finally gets through that turns the light on. At first, finding out that this system wasn't very secure didn't seem like that big of a deal, but imagine that you are traveling long distance at night on a small two-lane interstate. If there are thieves following in a car behind you with a signal-replicating device, they could send a packet to your car causing the low tire pressure light to illuminate. Now personally, if my tires were a little low on pressure, I'd probably just wait until I got to the next rest stop or gas station, but there are probably some people out there who would pull off the side of the road and bust out their little 12V air pump - game over! I can't imagine that this issue will go unchecked as the technology further develops. Eventually there will be some industry standard of security measures to be taken in TPM systems.
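     To show what I mean about the flag contradicting the value, here's a hypothetical Python sketch. The packet fields and the threshold are made up for illustration - the paper's actual TPMS packet format is different - but the class of bug is the same:

```python
# Hypothetical ECU logic. The flaw: trusting a "low pressure" flag
# without cross-checking it against the reported pressure value.

LOW_PRESSURE_PSI = 25  # made-up warning threshold

def naive_ecu(packet):
    """Turns the warning light on whenever the flag says so."""
    return packet["low_pressure_flag"]

def sanity_checked_ecu(packet):
    """Ignores packets whose flag contradicts the pressure value."""
    actually_low = packet["pressure_psi"] < LOW_PRESSURE_PSI
    return packet["low_pressure_flag"] and actually_low

spoofed = {"low_pressure_flag": True, "pressure_psi": 32}  # contradictory
print(naive_ecu(spoofed))           # True  - the light comes on anyway
print(sanity_checked_ecu(spoofed))  # False - the spoof is rejected
```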
     The final reading for the night was about planning for failure in the coming (or perhaps already here!) age of cloud computing. I really enjoyed this article; it read as if it were advice from an older brother or something. The author gives great advice on how to handle failure in general. When your service fails, what do you do? Does your browser display an HTTP error, or does the application crash and freeze the browser? Do you have a backup page to display in the event of a failure? Some of this advice seemed like common sense, but some of it not so much. For example, I don't know if I would have thought to use request buffering to reduce bottlenecks in the system, or to include little "retry" segments in the parts of my code that rely on retrieving data from a foreign source. In case the source has a hiccup, it might not serve the data when first requested, but the retry statement would allow it to get the data the second time instead of passing an error to the client application. I think I'll save this article to my bookmarks as all-around good programming advice and practices to employ when I'm employed.
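     Here's a minimal Python sketch of that retry idea, assuming a flaky fetch function of my own invention (the article doesn't give code):

```python
import time

def fetch_with_retry(fetch, retries=3, delay=0.5):
    """Call fetch(); on a transient failure, pause and try again."""
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except IOError:
            if attempt == retries:
                raise                    # out of retries: surface the error
            time.sleep(delay * attempt)  # simple linear backoff

# A fake source that hiccups twice, then serves the data.
calls = {"count": 0}
def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise IOError("source hiccup")
    return "data"

print(fetch_with_retry(flaky_fetch))  # succeeds on the third attempt
```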

Homework #4 - 8/29 (Makeup)

Problem set:

11.4)

The project manager chose to use the sandwich testing method. This method is good for testing the top and bottom layers of the system in parallel. Also, there is no need to write test drivers or test stubs, since the actual system components in the top and bottom layers are being tested. The weakness of sandwich testing is that there are no unit tests for the target layer, in this case layer II. The only time the middle-layer components are tested is during the integration tests with the other subsystems.
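Here is a minimal Python sketch of the idea, with made-up layers. Layer I (UI) and layer III (storage) are each exercised together with the real layer II (logic), so no stubs or drivers are needed - but notice that layer II never gets a unit test of its own, which is exactly the weakness described above:

```python
import unittest

class Storage:                 # layer III (bottom)
    def __init__(self):
        self.rows = []
    def save(self, row):
        self.rows.append(row)

class Logic:                   # layer II (the untested target layer)
    def __init__(self, storage):
        self.storage = storage
    def record(self, value):
        self.storage.save(value * 2)

class UI:                      # layer I (top)
    def __init__(self, logic):
        self.logic = logic
    def submit(self, text):
        self.logic.record(int(text))

class SandwichTests(unittest.TestCase):
    def test_top_through_middle(self):     # layer I integrated with real layer II
        storage = Storage()
        UI(Logic(storage)).submit("21")
        self.assertEqual(storage.rows, [42])

    def test_bottom_through_middle(self):  # layer III integrated with real layer II
        storage = Storage()
        Logic(storage).record(5)
        self.assertEqual(storage.rows, [10])

if __name__ == "__main__":
    unittest.main()
```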

11.7) Apply the software engineering and testing terminology from this chapter to the following terms used in Feynman's article mentioned in the introduction:
  • What is a "crack"?
  • What is "crack initiation"?
  • What is "high engine reliability"?
  • What is a "design aim"?
  • What is a "mission equivalent"?
  • What does "10 percent of the original specification" mean?
  • How is Feynman using the term "verification" when he says that "As deficiencies and design errors are noted they are corrected and verified with further testing"?

  • A "crack" in the turbine blade of the shuttle turbopump is called a "failure" in testing terminology. It is a deviation of the observed behavior from the specified behavior.
  • "Crack initiation" is an erroneous state - continued operation will lead to a failure.
  • Reliability is a measure of how the observed behavior compares to the specified behavior. High reliability means that the system performs how it was specified to. When he says "high engine reliability", he is talking about the engine performing as it was specified, with very low failure rate.
  • The "design aim" is the desired reliability that is specified during the design phase of development.
  • A "mission equivalent" is a unit of operating time equal to one mission; it is how the specified reliability was expressed. They wanted the engine to operate without failure for an amount of time equal to 55 missions, or a "mission equivalent" of 55. In this case, that turned out to be 27,000 seconds of operation.
  • This is talking about the observed reliability. Instead of being able to operate for a total of 55 missions without failure, some parts had to be replaced every 3 or 4 missions, and others every 5 or 6. This is where the 10% comes from: on average, the engine had to be repaired every 4 missions, and 4/55 ≈ 7%, which is why he said "at most, 10 percent".
  • Feynman is describing "fault detection", which is the process of identifying erroneous states and their underlying faults. When he says the errors are "corrected and verified" he means the faults have been repaired and the new expected behavior has been tested again and proven to be sufficient.

*The homework states to do exercise 11.9, but there is no such problem listed in the back of the chapter.*