Wednesday, December 4, 2013

Homework #19 - 10/24 (Makeup)


Tonight I'm going to blog about my project for my databases class. I was really enthusiastic about it and intend to keep working on it after the course is over, just for the experience. It was supposed to be a system for backyard gardeners. Users would be able to create an account with their name, email, and address, and they would automatically be assigned a garden object upon account creation. This was going to be handled with a SQL trigger on the database. The code would look something like this:

DELIMITER \\
CREATE TRIGGER addGarden  AFTER INSERT ON gardeners
BEGIN
  INSERT INTO gardens (owner_id)
  VALUES (OLD.id);
END\\
DELIMITER ;

I did not get this code to actually work, but the idea is that you change the delimiter from a semicolon before writing the trigger so that the semicolon at the end of the INSERT statement doesn't end the trigger definition early. The trigger runs after an insert occurs in the gardeners table, or in other words, after a gardener has created their account. It then inserts a new row into the gardens table with the garden's owner_id set to the id of the gardener who fired the trigger. I used OLD.id for that, and I think this may be where I made my mistake: looking at it again, I believe an insert trigger has to reference the freshly inserted row as NEW.id, since OLD only exists for update and delete triggers, and MySQL also wants FOR EACH ROW after the table name, which I left out entirely. Then the trigger is ended, and the delimiter is reset back to the semicolon.

There is also a pre-filled database of plants containing attributes such as common name, scientific name, mature height, duration, growth rate, minimum pH, and maximum pH, among others. Users will be able to select plants from this database to add to their personal garden and will get advice on things they should do based on the plants selected, for example, how deep to plant the seed or how far apart it should be from another seed. The system will also recommend certain plants that the user should try to grow based on the location of their address. It can do this because the database also has a table containing the states that each plant naturally occurs in, so it can run a simple query like the following to produce a suggestion list:

SELECT p.common_name FROM plants p
INNER JOIN naturally_occurring n
ON p.scientific_name = n.scientific_name
INNER JOIN gardeners g
ON g.state = n.state

This query looks a little bloated when you read it, but it basically builds a temporary result set by joining the three tables, plants, naturally_occurring, and gardeners, matching the tuples accordingly. It then selects only the common names of the plants that naturally occur in the same state the gardener lives in.

That is all for now, but I intend to add more features too - perhaps reminders on when to harvest, or weather information to alert users of inclement weather that may endanger their plants.

Sunday, December 1, 2013

Homework #25 - 11/14 (Makeup)

Deliverable 4:

     I'm much happier with my contribution to the group from this point on. I spent several hours working on this deliverable with material the other group members gave me. I researched online to see what a professional deliverable report should look like, and borrowed elements of the layout and some formatting designs. I also feel like I have fairly solid technical writing skills, and when I compared my report to the ones the group previously produced, the writing quality seemed just a little bit better, or at least more concise and to the point.
     Again, however, the group suffered from our division in both design and communication, and so the product was not written with a complete understanding of the overall system design. For example, in our test cases, the "Requirement" section of the JSON file was usually just a '?'. I had no idea what the point of even including this in the file was if they were all just question marks. When I read back through some of the older deliverable instructions, they said that each test case should be traceable back to a single requirement. To me, that is what this section should have stated: which requirement is this specific test case testing for? After asking Ian about it, however, he explained that it was supposed to be the required circumstances of the input needed to get the expected result. Perhaps I still misunderstood what he said, but that is what I took away from it, at least.
    I also reworked some of the previous deliverables to fit into this one and rewrote some of the wording, since each deliverable's instructions stated that it was supposed to become a chapter in our final deliverable booklet. I was quite pleased with the document after I finished with it. It really did look like a professional document. Tan whipped up some images we could use as a kind of company logo for our group, and I was able to include those as well, further adding to the professional feel. Getting a good-looking deliverable wasn't the only positive thing that came out of the experience, though. All of my team members complimented my work on it, and I felt like I finally did something to really contribute to the group, which boosted my confidence as well. Really a great experience overall, and I'm glad I was able to work on it and put so much time into it.

Homework #20 - 10/29 (Makeup)

Deliverable 3:

     To be honest, I don't even remember what was going on with my circumstances around the time of deliverable 3. My group basically functioned entirely without me for this one. I feel like I definitely dropped the ball during this part of the semester. I remember Ian showing me the code and walking us through it as a group. I also remember suggesting that we implement a timestamp attribute to keep track of when the test cases were run. Since our test cases were all run at basically the same time, or at least with very little time in between, it wouldn't matter much for our class project, but in a real-world setting you would also want to record who ran each test and when it was run. Other than that, Ian pretty much wrote the entire framework, and Andrew wrote the entire deliverable. I'm not sure what Tan was working on at the time.
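     Just to make that suggestion concrete, this is roughly the kind of record I had in mind for each run. The class and field names here are made up for illustration; they are not what our framework actually used.

    import java.util.Date;

    // Hypothetical sketch of the bookkeeping I suggested: alongside the test case id
    // and its result, record who ran the test and when it was run.
    public class TestRunRecord {
        public final String testCaseId;
        public final boolean passed;
        public final String runBy;  // who ran the test
        public final Date runAt;    // when it was run

        public TestRunRecord(String testCaseId, boolean passed, String runBy, Date runAt) {
            this.testCaseId = testCaseId;
            this.passed = passed;
            this.runBy = runBy;
            this.runAt = runAt;
        }
    }
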
     In a sense, I suppose we divided up our group sort of like the surgical team described in The Mythical Man-Month, with Ian being the chief surgeon. Andrew and I were mainly bookkeepers, and Tan ended up filling a dual role as the surgeon's right-hand man and also the public relations person, since he wrote the front end of our application, which would communicate the information to the client. In our case, however, the group suffered from disorganization more than anything else. Normally, dividing tasks among group members increases total efficiency, but in our case it probably hindered us more than it helped. It is very much like how Brooks describes some systems developed by large teams: programmers are usually in charge of not only the implementation but also the design of their components, so you get several components that are not designed in a unified manner. This was the case in our group as well, I think. We had basically one person writing the framework, so the others might not have fully understood the design. Unfortunately, that includes those of us who wrote the actual deliverables, so some of the information conveyed could well have been incorrect because it was not fully understood in the first place.

After hearing that we did a pretty horrible job on this deliverable, I committed myself to putting a lot more effort into the team and our future deliverables.

Homework #18 - 10/22 (Makeup)

Chapter 19 - Service Oriented Architecture

     This chapter brings back memories.... I interned at BMW in Spartanburg in the IT Innovations department. There were a lot of neat things going on there, with several different projects being worked on by different interns. I was probably assigned the least interesting project, and also probably the hardest. Let me see if I can recall, in correct detail, what I was assigned. I was working on a back-end web service which would be used for an in-house app store. The web service was to take in a JSON message which contained metadata about an app and error reporting for that app. It would include the client/developer name and contact information, a time stamp for when the error report was sent, the error message, and some other things I can't remember. The JSON message was sent via SOAP and RESTful messages to a GlassFish server that was hosting my web service. The service was then to take the JSON, parse it, and store the information in an Oracle database. Whew, I think that's the gist of it.
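     The real code is long gone, but the RESTful half of it was roughly this shape. This is a rough reconstruction from memory and mostly guesswork: the endpoint path, JSON field names, table, and connection details below are all invented, and I'm leaving the SOAP side out entirely.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    import org.json.JSONObject;

    // Hypothetical JAX-RS resource: accepts an app error report as JSON and stores it.
    @Path("/errorReports")
    public class ErrorReportResource {

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        public Response submit(String body) {
            try {
                // Parse the incoming JSON message (field names are made up).
                JSONObject report = new JSONObject(body);
                String developer = report.getString("developer");
                String contact = report.getString("contact");
                String message = report.getString("errorMessage");
                String sentAt = report.getString("timestamp");

                // Store the report; the connection string, table, and columns are also made up.
                try (Connection db = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/appstore", "user", "password");
                     PreparedStatement insert = db.prepareStatement(
                        "INSERT INTO error_reports (developer, contact, message, sent_at) "
                        + "VALUES (?, ?, ?, ?)")) {
                    insert.setString(1, developer);
                    insert.setString(2, contact);
                    insert.setString(3, message);
                    insert.setString(4, sentAt);
                    insert.executeUpdate();
                }
                return Response.ok().build();
            } catch (Exception e) {
                return Response.serverError().entity(e.getMessage()).build();
            }
        }
    }
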
     The reason this was, at least as I perceived it, the hardest task is that I had never had any exposure to all of the technology and standards that come along with web services. For that matter, I don't think any of my colleagues had either. I had to read about SOAP messages, RESTful services, WSDL and XML descriptor files, and other things I don't remember. It was also the first time I had seen SQL or done anything with a database, so the learning curve was huge, as you can imagine. It took me the entirety of my 5-month internship to finish what should have been a relatively simple program.
     If I had another chance to develop the same type of system, I would jump at it. Back then I didn't really have a good grasp of what things such as a WSDL file were; all I knew was that it was necessary for a web service to run. Now that I understand what it actually is and its purpose, it wouldn't be nearly as confusing if I had to do it again. Another thing I might employ is an actual life cycle model for my project. I remember not really knowing what I was doing, and just using the code-and-fix or waterfall models. Once something broke and I couldn't figure out what I did to break it, I would frequently just scrap the whole thing and start over from scratch - it was quite frustrating, really.

Homework #17 - 10/17 (Makeup)

Chapter 18 - Distributed Software Engineering

*After reading my fellow team members' blogs, I realize that mine isn't looking too bad. I'm not saying that their blogs are any better or worse than mine by any means; I just thought I was much farther behind than I really am, which gives me hope. We are optimists, after all, as Brooks says in The Mythical Man-Month.

     This chapter is all about distributed systems, which in reality includes most systems in use today. Even word processors are now linked to the cloud and load settings in remotely from other sources. The author highlights some of the difficulties involved in developing a distributed system:


  • Transparency - Should the system appear as a single unit, or is it sometimes useful to understand that it is, in fact, a distributed system? My intuition here says that it is better to appear as a single unit. It just seems like a cleaner design and hides the unnecessary details from the end user. 
  • Openness - Should a system be designed using standard protocols that support interoperability, or should more specialized protocols be used that restrict the freedom of the designer? Here I think it definitely depends on the system being built. If the system has very specific and unforgiving requirements, then specialized protocols should probably be used. If there is a little flexibility in the design of the system, then using standard protocols to increase interoperability would be preferred. 
  • Scalability - How can the system be constructed so that it is scalable? How can the system be designed so that its capacity can be increased as demand increases? Cloud computing comes to mind as a good solution to this problem. If your service is getting more demand, allocate it more resources from the cloud.
  • Security - How can usable security policies be defined and implemented that apply across a set of independently managed systems?
  • Quality of Service - How should the quality of service that is delivered to system users be specified and how should the system be implemented to deliver an acceptable quality of service to all users? This is a good question - what does quality of service, in the case of your system, mean? Is it having a stellar feature set, or is it more focused on always being available?
  • Failure Management - How can system failures be detected, contained, and repaired? For distributed systems, this is a big concern. You may not have control over some of the components of the system, but when they fail, the service is either interrupted or degraded, so how do you plan for that?
     Reading this chapter has helped me increase my understanding of distributed systems and of what everyone means when they talk about things like scalability, quality of service, and so on. I've been able to relate certain parts to my databases class. We've had to develop, essentially, a distributed system of our own design over this semester. We use a MySQL database in the back end to store our entities and their relations, and write a PHP or Java front end to serve the data to clients in the web browser and allow them to perform basic CRUD operations on the data stored in the database. I'm very grateful for my undergrad experience here at the College of Charleston. I'm not sure how well other schools prepare their students for real-world applications, but I feel as if the staff does a good job here by assigning relevant readings and projects.

Homework #16 - 10/10 (Makeup)

WELL - here I was about to read chapter 17 of our book, but when I started flipping through the pages, I realized that I've been reading the wrong textbook for a good portion of this semester, because the book I have been reading doesn't even have a chapter 17. This is pretty depressing....I've been reading the textbook we used for my CSCI 360 class. A lot of the material is similar, but still. I've probably done several incorrect assignments up to now. I don't know when it was that I confused the books, probably after fall break when my health started getting worse and I wasn't sleeping as well and started missing a lot of class. Ah well, what can ya do?

SO, chapter 17 of the right textbook. This chapter is all about software reuse and taking small preexisting components and composing them into a larger target software program. A 'component' can be interpreted in different ways, however. Some define it as a software element that can be independently deployed and is composed according to a composition standard. Others say that it is a unit of composition with contractually specified interfaces and explicit context dependencies. Basically, some think that a piece of software only counts as a component if it is written according to the standards, and others define a component based on its key characteristics.

Some definitions of component characteristics to remember:

  • A component is standardized if it is used in a CBSE process and conforms to a standard component model. This can define interfaces, metadata, documentation, composition, and deployment. 
  • A component is independent if it can be composed and deployed without the use of other components.
  • A component is composable if all interactions take place through a publicly defined interface.
  • A component is deployable if it is self-contained and able to act as a stand-alone entity. 
  • Components must be documented so that future users can decide whether it will meet their needs or not. All of the syntax and semantics of the interface should be defined.
     The rest of the chapter goes on to define the two different types of component-based software engineering: development for reuse and development with reuse. The former deals with making components that will be reused in other applications, and the latter with making applications that use existing components. 
     When writing components for reuse, you need to make sure there are no application-specific methods, that names have been generalized, that methods provide complete functionality, and that exception handling is consistent among methods; you should also incorporate a 'configuration' interface to allow the component to be adapted to different situations, and integrate any required components so that the developed component is independent.
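     To make that checklist concrete for myself, here is a toy sketch of the kind of interface such a reusable component might expose. Everything here (the names, the settings map) is invented for illustration and is not from the book.

    import java.util.Map;

    // Toy sketch of a reusable "message sending" component: generalized names and a
    // configuration interface instead of application-specific methods like
    // sendInvoiceToCustomer(). All names here are hypothetical.
    public interface MessageSender {

        // Configuration interface: adapt the component to different situations
        // (server address, sender name, and so on) without changing its code.
        void configure(Map<String, String> settings);

        // A generalized, complete operation rather than one tied to a single application.
        void send(String recipient, String subject, String body) throws SendException;

        // Consistent exception handling: every operation reports problems the same way.
        class SendException extends Exception {
            public SendException(String reason, Throwable cause) {
                super(reason, cause);
            }
        }
    }
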
     When writing applications that use preexisting components, you first design the application in outline, keeping it as general as possible to maximize the number of potentially usable components. Then you modify the requirements depending on the available components and design the architecture. Further component search and design follow, based on whether the previously selected components will fit the need. Finally, after all the components have been selected and the architecture design solidified, the system is composed.


Software reuse is one of the things mentioned in "No Silver Bullet" as a way to dramatically increase software development productivity. Buying off-the-shelf software not only saves you the pain of designing and writing it, but also the huge task of testing and debugging it. This is certainly an imaginable concept, but it would also require huge effort on the part of developers to write their components according to a standard so that they may be compatible with other components written in the same fashion.

Homework #13 - 9/26 (Makeup)

     This reading is all about software life cycle models. Not the most interesting thing to read about, but certainly important. Just as we can model systems and their behavior with UML diagrams for a better understanding of the system and as an aid in its development, we can also model the development cycle of a piece of software. Perhaps unintuitively, software life cycles can be modeled as activity-centered diagrams, where a class in the diagram represents an activity in software development (e.g., a Problem Definition activity or a System Development activity). The other way life cycles can be modeled uses an entity-centered view, in which each class in the diagram represents the content and structure of a work product (e.g., a Requirements Specification Document).

     There are several commonly used life cycle models, most of which are activity-centered. The first, and oldest, is the waterfall model, which describes a fixed sequence of activities: the requirements process leads to the design process, which leads to the implementation process, and so on. The defining thing about the waterfall model is that there is no way to back up; traversing up the waterfall is impossible. Because of this, the waterfall model is risky for large, costly software development and is mainly appropriate for small-scale personal or school projects, where one can restart with little consequence. The V-model is a variation on the waterfall model in which each activity before implementation is paired with an activity afterward that focuses on validating the system through testing and client acceptance. There is still no way to go backwards in the life cycle or re-iterate on activities. For this reason, the spiral model has become popular. This risk-analysis-based model starts at an origin and spirals outward; the farther away from the origin, the greater the cost of the development thus far. Each quadrant of the model represents the type of activity taking place at that point in the spiral. As you move along the spiral, you iterate among the four quadrants: first determining objectives, alternatives, and constraints; then evaluating the alternatives and identifying and resolving risks; then developing and verifying the next piece of the system, whether that is a design or code; and finally planning for the next phase, which could be a requirements plan or an integration plan.

     I cannot say I plan on becoming a project manager at any point in my career, but learning about life cycle models is still important to me. It's all related in the end, really. For example, the spiral model can be used not only to model software life cycles but also to model a design plan for your code. First, design the general layout of your code - come up with classes, methods, etc. Then write some prototypes and evaluate any alternatives; once you've decided the best way to go about it, write the code. After writing that portion of code, test and debug it before moving on to the rest of the program. This way you write in small increments, and each part is tested on its own before you integrate all of the components together into the whole program.

Homework #11 - 9/19 (Makeup)

Reading through these old assignments is a little bit depressing. Not in the sense that I don't like reading them, but just realizing how many of them I've actually already read yet never wrote a blog post on.

     Seeing as I have not written very many complicated programs, when I read that about half of the development time of a piece of software is devoted to testing and debugging, I was a little surprised. Can it really be so cumbersome? The author provided many references to back this claim up, and it was also mentioned in The Mythical Man-Month, in which Brooks states that he devotes one third of the development schedule to designing, one sixth to implementing, and the rest (half) to testing and debugging - wow! I suppose it sounds intimidating, but I'm the inexperienced one here, so I'll take their word for it!
     One of the most interesting parts of this read for me was the different phases of thinking about testing and debugging. The author has a funny line in the article: "I called the inability to distinguish between testing and debugging 'phase 0' because it denies that testing matters, which is why I denied it the grace of a number". Well said, sir. Well said. Phase 0 thinking ensures that your software will have no testing, therefore no quality assurance, and therefore no quality. Phase 1 thinking is assuming that when your tests succeed, the software works. In reality, this is not the case; Myers says that it only takes one failed test to prove that software doesn't work, but even an infinite number of tests cannot prove that it does work. Phase 2 thinking is where you assume the software doesn't work. In this case, the tester is always trying to prove that the code is broken, and if perhaps it is not, testing will never end, because there are no bugs to reveal - but they keep trying. Phase 3 is thinking about testing and debugging as gaining confidence in the software. A successful test does not actually improve the quality of the software, but it increases our perception that it is quality software. Therefore, we release the software once we have gained an acceptable amount of confidence in it through testing and debugging. And finally, phase 4 thinking treats testing as a state of mind. Developers know there will be bugs, but the main goal is to write your code with testing in mind. The act of thinking about and designing tests while programming increases your bug prevention and makes your code easier to test when the time comes. Although all of the testing strategies strive toward the same goal of quality software, this is the best: it not only reduces the number of bugs through preventative measures, but also makes the labor of testing easier because the code itself is more testable.
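     To me, a tiny example of that phase 4 mindset looks something like the following: keep the interesting logic in a small, pure method so the tests are almost free to write. This is just my own made-up illustration using JUnit; the discount rule is hypothetical.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PriceCalculatorTest {

        // Hypothetical example of writing with testing in mind: the discount rule is a
        // small, pure method with no I/O, so it is trivial to exercise from a test.
        static double discountedPrice(double unitPrice, int quantity) {
            double discount = (quantity >= 10) ? 0.10 : 0.0;
            return unitPrice * quantity * (1.0 - discount);
        }

        @Test
        public void bulkOrdersGetTenPercentOff() {
            assertEquals(90.0, discountedPrice(10.0, 10), 0.0001);
        }

        @Test
        public void smallOrdersPayFullPrice() {
            assertEquals(30.0, discountedPrice(10.0, 3), 0.0001);
        }
    }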

Saturday, November 30, 2013

Homework #9 - 9/12 (Makeup)

Response to Mythical Man Month (Ch 1-4):

     Tonight's reading was quite interesting, to say the least. It touched on many different aspects of the development life cycle of a programming project. For one, when a project is falling behind schedule, a natural reaction is to add more manpower in order to get it back on track. However, adding men to the project in fact has the opposite effect. Time for training the newcomers in the language, environment, and current system design must be accounted for. Then there's the time it takes to devise a new delegation of the tasks involved in the project, so that the new people have something to do. And finally, you must include the time it takes to rewrite the schedule with the new manpower included in the projection. So overall, the trade-off for adding more people to a project to get the job done faster is negative.
     I like the idea that the author credited Harlan Mills for: making programming teams that resemble surgical teams. Several small groups of people working on the same project, or component of a project, with each member having a very specific role within the group that supports the effort of the group as a whole. Only one or two main programmers and designers, and several other supporting roles are needed. This idea of a surgical team reminded me of a pair-programming video that Dr. Starr showed our CSCI 230 class several semesters ago. It stated that using pair-programming increases productivity, quality of code, and the learning and enjoyment of the programmers involved, and although I cannot say I've had much experience with pair-programming, I am inclined to agree heavily.
     Another part of this article talks about how most programming projects are divided among developers, with each developer also in charge of the design of his or her specific component. The author says that this is dangerous because the overall design of the system is no longer unified, and it can have many parts that, while well designed in themselves, are not designed to work with the other components as well. Therefore, again, the solution is to separate the design of the project from the implementation of the project. Hence where the term "code monkey" comes from: a small team of the "creative" and "elite" programmers get to have all the fun designing the system, then pass off the frustrating part of implementation to the "lesser" programmers. This idea in itself seems backwards to me as well. Why would you want the elite programmers to just design it, if they aren't going to use the skills that classify them as "elite" to actually code it? Even though it may be well designed, the code would not be as efficient and reliable as it would be had the elite programmers been the ones coding it in the first place. Perhaps I'm thinking about it in the wrong way. The author did say that this team architecture should only be used for large-scale programming projects.

I enjoyed this read a lot and have definitely taken away from it, at the very least, that adding more people to a project that is running behind schedule is the wrong move. One cannot conquer the Mythical Man-Month because it is just that - mythical and imaginary.

Homework #8 - 9/10 (Makeup)

Response to The Future of Programming:

     I personally don't see this getting very far. They advertise programming from the browser and therefore being able to program on any device that can run a browser, or more specifically, Chrome. Sure, programming on Chromebooks might be useful, but there's no real difference between doing that and programming on any other conventional laptop. The other possibility they mention is that you'd be able to program on your iPad! Are you kidding? I can't think of anything worse than trying to write a program on any sort of touchscreen device - unless we're talking about dragging and dropping snippets of code to create a larger program, which is what I think is the real future of programming. Before I talk about that, I want to finish with Cloud9, though. One of their other main selling points is that you can "zoom" in and out of your code to have a better idea of where you are within it. This is pretty cool, but it isn't novel by any means. It is probably something that will be standard in any future IDEs that come out. Ian showed me the Sublime text editor, which I have fallen in love with this semester, and it more or less does the same thing. Another selling point of Cloud9 is being able to access your code anywhere online, but again, this isn't a very novel technology, nor does it really meet any new needs. Over the semester, we've learned how to use version control programs such as Subversion and Git. These programs, in my opinion, provide the same services that Cloud9 does and more. Not only can you save your code online and re-download it from any other machine (although perhaps not as easily), but you can also revert to older versions of the code if you accidentally broke something and then committed it to the repository. In the end, Cloud9 to me is mainly just a kind of over-the-top showcase of what an IDE should be. It looks pretty, but doesn't provide any new functionality that previous IDEs haven't provided before.
     Now, where I think the real future of programming is headed is graphical programming interfaces, much like how developing websites has become nowadays compared to when the world wide web still had its baby teeth. Before, you had to know HTML, and even then you had to spend a lot of time and effort to make a website that looked more than pathetic. Nowadays, you click on a theme, and boom, it formats your entire page for you. Adding pictures and tables? No problem, just click here. Don't like it over there? No problem, just drag it over here. The most comparable thing in use today that I can envision the future of programming being like is developing apps for Android. The Android SDK implements drag-and-drop programming with certain elements, such as radio buttons, check boxes, and lists. All you have to do is click which type you want to insert and it's done, no writing Java code necessary. The other thing that leads me to believe this is the future of programming is how much more abstract our languages have gotten. If you think about the progression throughout the decades, it makes a lot of sense. Machine language deals with literal machine hardware instructions; with C, you can directly access memory at a specified location; then you move to Python or other similar languages, where a variable's type is dynamic rather than static. It won't be long before languages are abstracted further to accept "ideas" of how the program should work as the input, and the environment or language will translate them further for you as necessary. It's only a small step from this to graphical programming, in my opinion.

Homework #7 - 9/5 (Makeup)

Response to readings:

     I was already familiar with the first reading assigned for today, The Magical Number Seven. Dr. Manaris spoke of it often in the classes I took with him. I remember him saying, "If you have more than 7, plus or minus 2, lines of code in your method, you are probably doing it wrong! Split it up into multiple methods!" Since learning of it, I find myself thinking of it every now and then in everyday life. For example, if I go grocery shopping, I won't even try to remember the list of things to get in my head if it's around that magical 7 ± 2 number. Sometimes I wonder if computer science students from other schools learn of this principle. I don't spend that much time reading other people's code, but I wonder, if I did, how much of it I would find that could be broken up as Dr. Manaris suggests.
     The second reading for today was a study done on the wireless tire pressure monitoring systems (TPMS) used in passenger vehicles today. These little devices are very nifty in that they alert you of low tire pressure without you having to constantly check it yourself. They're really a neat little invention - the scary part that this paper reveals is that, so far, there have been little to no security measures taken to prevent eavesdropping on the signals or to differentiate between fake and legitimate packets. The car's computer failed in several respects, in that there were several ways it could have told that a packet was false but didn't. For one, you could send a packet to the ECU stating that the tire pressure is low but at the same time give a pressure value that is acceptable. It is a little surprising that software developed to alert for low tire pressure can't even tell what low vs. normal tire pressure is! Another flaw in the ECU is that it doesn't authenticate messages. In other words, one could continuously bombard the ECU with mixed-message signals until finally one gets through that turns the light on. At first, finding out that this system wasn't very secure didn't seem like that big of a deal, but imagine that you are travelling long distance at night on a small two-lane interstate. If there are thieves following in a car behind you with this signal-replicating device, they could send a packet to your car causing the low tire pressure light to illuminate. Now personally, if my tires were a little low on pressure, I'd probably just wait until I got to the next rest stop or gas station, but there are probably some people out there who would pull off the side of the road and bust out their little 12V air pump - game over! I can't imagine that this issue will go unchecked as the technology further develops. Eventually there will be some industry standard of security measures to be taken for TPM systems.
     The final reading for the night was about planning for failure in the coming (or perhaps already here!) age of cloud computing. I really enjoyed this article; it read as if it were advice from an older brother or something. The author gives great advice on how to handle failure in general. When your service fails, what do you do? Does your browser display an HTTP error, or does the application crash and freeze the browser? Do you have a backup page to display in the event of a failure? Some of this advice seemed like common sense, but some of it not so much. For example, I don't know if I would have thought to use request buffering to reduce bottlenecks in the system, or to include little "retry" segments in the parts of my code that rely on retrieving data from a foreign source. In case the source has a hiccup, it might not serve the data when first requested, but the retry would allow the code to get the data on the second attempt instead of passing an error to the client application. I think I'll save this article to my bookmarks as all-around good programming advice and practices to employ when I'm employed.
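     The retry idea in particular stuck with me, so here is a rough sketch of the shape I imagine it taking. The RemoteSource interface and the numbers are made up; it's just the pattern of trying again briefly before giving up.

    import java.io.IOException;

    public class RetryExample {

        // Hypothetical stand-in for any call that fetches data from a foreign source.
        interface RemoteSource {
            String fetch() throws IOException;
        }

        // Try the call a few times, pausing briefly between attempts, instead of passing
        // the very first hiccup straight back to the client application.
        // Assumes maxAttempts is at least 1.
        static String fetchWithRetry(RemoteSource source, int maxAttempts, long waitMillis)
                throws IOException, InterruptedException {
            IOException lastError = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return source.fetch();
                } catch (IOException e) {
                    lastError = e;                    // remember the failure
                    if (attempt < maxAttempts) {
                        Thread.sleep(waitMillis);     // brief pause, then retry
                    }
                }
            }
            throw lastError;                          // every attempt failed; report it
        }
    }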

Homework #4 - 8/29 (Makeup)

Problem set:

11.4)

The project manager chose to use the sandwich testing method. This method is good for testing the top and bottom layers of the system in parallel. Also, there is no need to write test drivers or test stubs, since the actual system components in the top and bottom layers are being tested. The weakness of sandwich testing is that there are no unit tests for the target layer, in this case layer II. The only time the middle-layer components are tested is during the integration tests with the other subsystems.

11.7) Apply the software engineering and testing terminology from this chapter to the following terms used in Feynman's article mentioned in the introduction:
  • What is a "crack"?
  • What is "crack initiation"?
  • What is "high engine reliability"?
  • What is a "design aim"?
  • What is a "mission equivalent"?
  • What does "10 percent of the original specification" mean?
  • How is Feynman using the term "verification," when he says that "As deficiencies and design errors are noted they are corrected and verified with further testing"?

  • A "crack" in the turbine blade of the shuttle turbopump is called a "failure" in testing terminology. It is a deviation of the observed behavior from the specified behavior.
  • "Crack initiation" is an erroneous state - continued operation will lead to a failure.
  • Reliability is a measure of how the observed behavior compares to the specified behavior. High reliability means that the system performs how it was specified to. When he says "high engine reliability", he is talking about the engine performing as it was specified, with very low failure rate.
  • The "design aim" is the desired reliability that is specified during the design phase of development.
  • Again, the "mission equivalent" is the specified reliability. They wanted the engine to operate without failure for an amount of time that was equal to 55 missions, or "mission equivalent" of 55. In this case, that turned out to be 27,000 seconds of operation.
  • This is talking about the observed reliability. Instead of being able to operate for a total of 55 missions without failure, some parts had to be replaced every 3 or 4 missions, and others every 5 or 6. This is where 10% comes from - on average, every 4 missions (4/55 ≈ 7%, which is why he said "at most, 10%") the engine had to be repaired.
  • Feynman is describing "fault detection", which is the process of identifying erroneous states and their underlying faults. When he says the errors are "corrected and verified" he means the faults have been repaired and the new expected behavior has been tested again and proven to be sufficient.

*The homework states to do exercise 11.9, but there is no such problem listed in the back of the chapter.*

Monday, October 21, 2013

Project Update 10/21/13

     We haven't had a group chat in a while, and the fact that I've missed some classes recently doesn't help either, but the last time we spoke, I mentioned that I'd work on the script and webpage for the group's test results. It turns out the only thing I had to write in the script (at least for a Windows command-line script) was

    start groupwebpage.html

This automatically starts whatever default browser is set on the client's computer and loads the specified HTML file. Of course, the file has to be local for this to work, but you could do it with any website as well. E.g.

    start http://www.google.com

This works the same way. I think it recognizes the http prefix and automatically starts whatever program is associated with the protocol, or at least this is what I have read online. So after realizing that the script itself was very easy, I decided to go ahead and work on the webpage. I don't have much yet, but I'm going to be using CSS styling and hopefully some cool HTML5 elements as well. We'll see what I can incorporate. I'm thinking something like, if all test cases are successful, display some silly .gif and play victory music in the background or something.

Tuesday, October 8, 2013

Homework #15 - 10/8 - Deliverable 2 Reflections

     Well here I am at 6:15am writing a post about my night of fun. It started out simply enough: I was just going to continue where I left off after building the project in Ubuntu, run it, see how SugarLabs looked, try to find the source code, etc. I decided, however, that running a VM inside of Windows 7 to run Ubuntu, then building a whole OS (SugarLabs) and running that inside of Ubuntu, was just a little too much abstraction, and my poor 3 GB laptop was struggling, to say the least. So Tan suggested that I just install Mint alongside Windows 7 and go from there, which I thought was a good idea. Well, hours later, after battling with my boot options trying to figure out why my computer refused to boot from any USB device, I had to break down, go to Wal-Mart, and buy some blank discs. At about 3am I had finally finished burning an install disc, and it worked like a charm. So I finished installing Mint and did all the things I had done in Ubuntu to download Sugar and build it. Yay, success!
     Running Sugar is interesting: the mouse pointer is gigantic and the buttons are massive as well, but it is surprisingly smooth and modern for a kids' OS. When you move the pointer to any of the screen corners, it brings up borders with options, just like a Mac does - pretty nifty. The navigation and interface of SugarLabs, though, could really use some more focus on design. It's not very intuitive, and things aren't given very meaningful names. For example, I could not find the list of Activities (SugarLabs' version of programs, or apps) anywhere, because they ended up being in a directory called "Pippy" with a big snake icon. I guess that's kind of fun and interesting for kids, but man is it confusing.
     Thank god for my teammates on this one: Tan was well ahead of us in finding the code, and Andrew was able to put together a very good-looking deliverable. Although I wish I had been able to get some sleep, the night was not wasted. I learned a lot, got even more acclimated to Linux and the command line, and was able to play around in our target project a little, and as a result I feel a lot more prepared for our future endeavors.

Thursday, October 3, 2013

Homework #14 - 10/3

      The article we were to read for today was all about legacy systems and how to go about transferring from their use to a new system. Some of the reasons for transferring include being able to take advantage of new technologies that the legacy system cannot incorporate, the high costs of maintaining the legacy system, and investing in the business. This whole article feels very familiar to me, because one of my jobs is delivering pizza, and the system we use to take orders is incredibly old. I'm not talking Windows 95 old, I'm talking green-screen-monitor-and-tower-combined-with-only-keyboard-input old. This consistently blows me away: why are we stuck in the 80s? Don't ask me how they replace these dinosaurs, because they have broken before and we end up getting "new" ones, which are the exact same model. It's also strange, because this isn't some local small business either; this is an international chain corporation! That being said, the store I work for is franchise-owned, so it may be possible that only the franchise has yet to update its technology and the corporate stores already have. Either way, the franchise isn't that small either, so you'd think they could benefit from upgrading.
     I've often thought about how I would program a new system to take orders for our store, and what kinds of things I would do differently and/or inherit from the old system. Upgrading the legacy system seems to me like an easy enough task, but when you read this article and actually imagine everything involved with the legacy system and all of its different components, it quickly becomes daunting, and I can now understand why the pizza joint has yet to upgrade. I tried to imagine how I would go about doing it if somehow they ended up hiring me for the job, and I would definitely prefer to use the butterfly method discussed in the article rather than the chicken little method. All the extra complexity of writing components to make the new and legacy systems able to interact just doesn't seem worth it. I wonder, though, if there are other methods not discussed in this article, since it was published in 1999. They speak toward the end of the paper about the research field of migrating legacy systems, and I find myself curious whether there has been any further research and what its results were.

Tuesday, September 24, 2013

Homework #12 - 9/24

     I found this article an interesting read. Control flow graphs are kind of understood subconsciously by programmers, but when we actually look at them and investigate their properties, there's a lot more to them than you might think at first. One thing that was confusing to me, or at least not very intuitive, was the GEN, KILL, IN(), and OUT() sets. The GEN, IN, and OUT I understand: GEN holding whatever definitions the line of code produces, IN consisting of all of the definitions reaching the entry of the node, and OUT consisting of all of the definitions leaving it. The KILL set, however, is a little more difficult to wrap my head around. It is defined as the set of definitions that are killed if they reach the entry to the node. An example they give is a KILL set of {3 : sum, 5 : sum, 8 : sum} for the line of code sum = 0; at first the set reads like the sum of 3 and 5 being 8, although I'm not sure that's what the values in the set actually mean, and I couldn't see how those numbers were derived for a line of code that merely sets a variable equal to zero.
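     Writing this out, I think the numbers in that set are just the labels of the other nodes that also define sum, rather than values to be added together: the node containing sum = 0; "kills" every other definition of the same variable, because once it executes, none of those older values can reach any later point in the graph. A little made-up example (the node numbers in the comments are invented to mirror the article's notation):

    public class KillSetExample {
        public static int example(int[] values) {
            int sum = 0;               // node 2: a definition of sum
                                       //   GEN(2)  = { 2 : sum }
                                       //   KILL(2) = { 3 : sum, 5 : sum, 8 : sum }
            sum = values[0];           // node 3: another definition of sum
            for (int i = 1; i < values.length; i++) {
                sum = sum + values[i]; // node 5: another definition of sum
            }
            if (sum < 0) {
                sum = 0;               // node 8: another definition of sum
            }
            return sum;
        }
    }
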
     When reading about definition-use pairs, the first thing I thought of was fields and methods of classes. Every node in the control flow graph is like a class (even though it could be just a single line of code): its definitions are like the field values and its uses are like the methods. The article also mentions an upward exposed use, which it describes as a use of some variable that can be reached by a definition of that variable from another node. This is a little confusing to me; it almost seems ambiguous. What I grasp from it is that when a variable is defined and then used in a later node, the use in that later node is considered upward exposed.
     Program paths are another one of those intuitive subjects in the article. My understanding is that any sequence of consecutive nodes where the output of node n leads to the input of node n+1 is a path (or subpath). A program path is complete if the starting node of the path is the starting node of the whole program and, likewise, the ending node is the exit node of the program. Definition-clear paths are something I never thought of, but they are easy enough to understand: a path is definition-clear with respect to a variable v if it contains no new definition (assignment) of v along the way. The article also talks about infeasible paths, which are basically any paths that cannot be executed regardless of the program's input. An example would be:

     String myName = "";
     if(false) {
          myName = "Justin";
     }

The variable myName will never be set to "Justin" no matter what code comes before or after it; therefore, that program path is infeasible.
     All of this material seems familiar or intuitive to us as programmers because, as seen in the code above, we use these concepts regularly; we have just been unaware of their formal definitions until now.

Monday, September 16, 2013

Homework #10 - 9/17

Response to readings

     While the readings for today were not the most exciting, I can see why they are important. We have many UML diagrams to explain different parts of a software project, from class diagrams, which describe the actual classes in the code, to sequence diagrams, which describe how different actors in the program or environment communicate with each other, but there are so many different types of diagrams, and they can be too cumbersome at times, I feel. The "higraphs" that the author introduces seem to solve the issue by being able to describe nearly any type of diagram or graph with one standard, allowing for dynamic meanings of edges, directed or non-directed, and also modular definitions of the "blob", which represents a set of elements. A blob within a blob could represent the entire set of the outer blob, or not; its meaning adjusts to whatever you want it to be. I also really like the idea of being able to express the Cartesian products of sets with higraphs. I happen to be in the databases class this semester, and certain queries we've learned will calculate the Cartesian product of two tables and then apply a filter to select only the tuples of interest, so being able to express this visually without having to do the calculations is useful, since with large sets the product can get out of hand very quickly. The only issue I have with higraphs is that, because they are so versatile, you may have to explain the graph to viewers, and it may not be as intuitive to understand as some of the more traditional diagrams. 

     Aside from the readings, I'd also like to talk about my experience selecting candidates for my group's open-source project. We met on Saturday to get a good first idea of what project to do, but I feel like we were pretty distracted by all of the other roommates present. Only two of us had our computers with us, so while I browsed the H/FOSS list, another group member was looking at projects on sourceforge.com. He suggested a few gaming-related programs that are being developed and put them on the list, which we would later vote on. After what turned out to be a mostly unproductive meeting, we decided that each member would go home, find another project of their choosing to add to the list, and try to compile/run it. I went home and installed a bunch of software on my laptop - a VM program, Eclipse for C/C++ and Java EE, as well as some other programs I thought I would need - and attempted to compile some of the source code of a couple of the game-related programs. I'm not sure if it's just me or if the Eclipse IDE is complicated, but I couldn't get anything to compile and run. I was greeted with build path errors, project names not matching files, and other issues as well. So I threw in the towel and searched for the program I would contribute to the list. While I do like games, I felt that we should focus on something more meaningful that would look better on a resume, so I suggested a music composition/sheet music program. We'll see what the group thinks soon enough.

Thursday, September 5, 2013

Homework #6 - 9/5

Problem Set:

4.5) Using the technique suggested here, where natural language descriptions are presented in a standard format, write plausible user requirements for the following functions:
  • An unattended petrol (gas) pump system that includes a credit card reader. The customer swipes the card through the reader then specifies the amount of fuel required. The fuel is delivered and the customer's account debited.
  • The cash-dispensing function in a bank ATM
  • The spelling-check and correcting function in a word processor.


4.6) Suggest how an engineer responsible for drawing up a system requirements specification might keep track of the relationships between functional and non-functional requirements.

Functional requirements should be testable with specific code to determine whether or not the prototype fulfills them. Non-functional requirements are usually some sort of constraint imposed on the system, for example, that the code must be written in Java.


4.7) Using your knowledge of how an ATM is used, develop a set of use cases that could serve as a basis for understanding the requirements for an ATM system.

Tuesday, September 3, 2013

Homework #5 - 9/3

Responses for readings for 8/29 and 9/3

     In homework #2, we were asked to discuss whether or not we think programmers should be certified professionals like doctors and lawyers. I held the position that we should not, but after reading some of these articles, I have rethought my take on this subject. In the article about the Therac-25 incidents, the authors say that "more events like those associated with the Therac-25 will make such certification inevitable". I have to agree, at least in the case of safety-critical systems and perhaps systems that deal with sensitive data as well, where security is essential. Of course, one could argue that nearly all modern applications fit these categories - apps are getting more and more complicated, requesting access to all sorts of personal information on our phones, and even becoming integrated into our homes and cars. I think software engineers in these fields should undergo extra training on how to handle sensitive data as well as how to implement safety features in their programming. The Therac-25 case unfortunately placed partial blame on simple programming mistakes, such as incrementing a variable by one (eventually causing an overflow condition) instead of setting it to a fixed non-zero value.
     Another focus of the readings was on having thorough requirements outlined before starting a project. The main example of this is the FBI case tracking system, VCF. It is hard to believe that such a massive software development failure could have happened so recently. The major time constraints put on the project, as well as the focus on small details of the final product (such as where a button should be located in the interface) instead of the overall functionality of the program, set it up for failure from the beginning. It is also disturbing that the FBI was going to do what's called a 'flash cutover', where you replace a legacy system with a totally new one in one fell swoop and cannot revert if there are any failures. This was all exacerbated by the 9/11 attacks as well as several staff and management turnovers during the development process.
     Another common theme I noticed was complacency. Software from previous versions was reused and assumed to work correctly when that was not necessarily the case. The Therac-20 contained some of the same software bugs as the Therac-25, but the earlier machine had hardware interlocks in place to prevent an overdose even if the software failed. This resulted in many blown fuses, but prevented potentially fatal overdoses of radiation. The same thing is observed in the Ariane 5 incident, where software from the Ariane 4 was reused and some functions that were already implemented were left in, even though the newer version had no use for them. The software "was assumed to be correct until it was shown to be faulty", as the accident report points out. Unfortunately, as projects have limited budgets, it seems that potential risks get downplayed and do not receive as much attention as they should. Although it shouldn't be this way, perhaps it is up to the developers to ensure they're doing everything they can to practice safe programming and to avoid unnecessary complication when writing software.

Monday, September 2, 2013

My Subversion Experience

I'm not too familiar with command-line work in Windows, so you can imagine that getting started with Subversion was a little difficult. After downloading Subversion 1.7 from the website and extracting the zip file, I was sadly distraught not to find an "install" file to make my life easy. I read some quick-start tutorials on the website and couldn't seem to find a straight answer on how to really get started. Luckily, I read that if you are unfamiliar with the command line, there are several GUI clients available online for free and you need only search for them. So, I Googled "svn gui client" and ended up downloading the top one from the list: SmartSVN. This is a super easy and intuitive program to use. Extract the download file (that they email you) and run the setup file. During setup, it asked if I wanted help setting up a repository or already had one set up. I chose the latter option, as we have a class repository. After installation was finished, it asked for the repository location. I simply copy/pasted the URL for our class repository, and it found it and asked for my login and password. After inputting my password, it created a local copy of our playground repository on my machine. All I had to do then was create a new folder, called "Wooton", within Windows. I added a txt file as well, which only contained the text "Test Doc". These changes showed up in the GUI client, and I was then able to use the "commit" button at the top to add my local changes to the repository, and thus my directory was created. I refreshed the website and sure enough, my name and txt file showed up, yay! So, my svn directory is from now on https://svn.cs.cofc.edu/repos/CSCI362201301/playground/Wooton/. That wasn't so bad, yeah?

Monday, August 26, 2013

Homework #3 - 8/27

Problem set:

10.6) A multimedia virtual museum system offering virtual experiences of ancient Greece is to be developed for a consortium of European museums. The system should provide users with the facility to view 3-D models of ancient Greece through a standard web browser and should also support an immersive virtual reality experience. What political and organizational difficulties might arise when the system is installed in the museums that make up the consortium?

The fact that the system would be used across multiple countries in Europe brings up several problems. The system would have to comply with several different governmental regulations and standards. Where would the data be hosted? Which country should have the database of the models? Which country should pay to develop the system? Who should be in charge of maintenance? Should the responsibility be divided across multiple countries? Would that make intercommunication more difficult? It would need to support multiple languages. The list could go on...

10.10) You are an engineer involved in the development of a financial system. During installation, you discover that this system will make a significant number of people redundant. The people in the environment deny you access to essential information to complete the system installation. To what extent should you, as a systems engineer, become involved in this situation? Is it your professional responsibility to complete the installation as contracted? Should you simply abandon the work until the procuring organization has sorted out the problem?

I would advise my managers of the situation as needed, but otherwise stay out of it. Sure, it is my professional responsibility to complete the installation, but if someone is denying me essential information, I obviously cannot continue, and I am certainly not going to escalate things by attempting to force it out of them.

Homework #2 - 8/27

Responses for "No Silver Bullet", "Kode Vicious", and "Software Analytics: So What?":

     First of all, "No Silver Bullet" was probably my favorite article of the three because the author made his case in such a systematic and thorough fashion. He begins by talking about the growth and advancement of computer hardware, in tandem with its reduction in cost, over the past few decades, and how the transistor, along with other electronics and organizational methods, made that progress possible. It was the one thing that solved all of the problems, a kind of "silver bullet" for the obstacles of growth at the time. He then points out that there is no such silver bullet for software development progress and likely will not be in the future. There are many reasons that he points out, but the main one that stuck with me was complexity. He says "Software entities are more complex for their size than perhaps any other human construct...". I always suspected this was the case, but seeing someone else point it out and give reasons why validates what I sometimes dismissed as self-pity. The author lists several other attributes of software that make it unlikely to see the same progress as hardware, such as there not being any good way to visualize your product, unlike anything made in the physical world, where you can have blueprints, measurements, and scale prototypes. He then goes on to talk about how the progress software development has seen is due to the solving of the "accidental difficulties". This includes the advent of the high-level programming language, saving time and increasing productivity by abstracting the language away from the machine instructions. Of course, as with most things in this article, there is a negative side to the subject, as the author points out the limits of the different strategies for increasing productivity.
     The amazing thing about this article is that in spite of all of these difficulties and the limits of the past breakthroughs, the author holds out hope for progress. He suggests creating a kind of software development repository, where programmers can reuse code from other projects in their own. This kind of situation would require the collaboration of many companies and government departments to pitch in and add to the bank of code. While I do think this would greatly increase programmers' productivity, I don't see it happening for a while. To a degree, I'm glad, though, because it would probably mean fewer programming jobs and reduced pay as well. Another strategy that the author points out would be to grow code instead of building it. This entails building small portions at a time to get them working and tested, then adding to them incrementally, making sure each part works as you go. I really like this idea and am going to try to employ it as much as I can while coding, as it keeps you from writing a pile of code and then having to track down the bugs only when you finally test it. Finally, the author says that we must find a way to turn any software designer into a great software designer, as the difference between the two is a big one, and if it were possible to make all designers great ones, we would significantly increase our overall productivity. He points out that great programmers and great managers are equally rare, but, amusingly, he doesn't see companies spending nearly as much effort on finding and cultivating great programmers. I'm all for this one as well; it makes a lot of sense to strive toward this goal. There may be no "silver bullet", but I agree with everything Professor Brooks has presented in this essay and am looking forward to the bumpy road ahead.

     Well, there's 500 words already... but we'll continue on. The next article, "Kode Vicious", is more of an editorial response to readers' questions. The author points out some great practices to use while programming, though. The first is about knowing when to just quit and instead do something else, which may be any number of things. I have run into this feeling several times over my scholastic career, and the most common thing I do is start fresh or reconsider my approach to the problem. Hopefully, after taking this course, I will have become better at taking the right approach from the start. The other programming practice the author points out is to use the scientific method while fixing bugs: document each bug, give a theory of why you think it is happening and possible solutions, and record whether the theory was proved or disproved after the fix was applied. While this sounds like a great idea to use when programming in the workforce, I don't see myself using it as a student. The programs we write are usually too small or simple to require such meticulous practices. I just hope I can remember and utilize it after I graduate.

     The final article, "Software Analytics: So What?", seems a little more abstract than the others and is a little harder to grasp. From what I understand, software analytics is basically the process of mining software development data, either past or present, to improve the quality of the software currently being developed. This means things like looking at patterns in old projects to see what obstacles they encountered and how they were solved. This is very similar to one of the proposed solutions from the first article for increasing software development productivity, a sort of global repository of solutions to common problems. One of the easier points the article makes is that if you have a commonly asked question, you should build a toolkit to address it; if you have an infrequent question, you should deploy a data scientist before deploying any toolkit. I will be interested to see what software analytics evolves into and how it will aid software development in the future.

Wednesday, August 21, 2013

Homework #1 - 8/22

Problem set:

1.3) What are the four important attributes that all professional software should have? Suggest four other attributes that may sometimes be significant.

The four main attributes all professional software should have are:

  • Maintainability - The software should be written in a fashion that it may be easily modified to meet any future needs.
  • Dependability and security - This software should not cause any physical or economic damage in the event of a failure and it should not allow malicious users to access or damage the system.
  • Efficiency - The software should not waste system resources and should be responsive.
  • Acceptability - The software has to be accepted by the customer for which it was designed.

1.8) Discuss whether professional engineers should be certified in the same way as doctors or lawyers.

I am somewhat torn on this subject, but I think I'm leaning towards no. The reason is that lawyers and doctors don't really do much testing in their related fields. Software engineers provide their skills to build a product, but then they must rigorously test and debug it. The medication strategies that doctors use to treat patients are surely not experimental, nor are the rules a lawyer must follow in order to have a valid case. This means that software engineers are really more like the researchers who precede the medication strategies, or the senators and congressmen who pass laws before they are in use. I may be wrong here, but I don't think congressmen are certified to do their job.

1.9) For each of the clauses in the ACM/IEEE Code of Ethics shown in Figure 1.3, suggest an appropriate example that illustrates that clause.


  • Public - An example of this clause would be a software engineer pointing out flaws that would allow malicious users to infiltrate a camera system to monitor traffic.
  • Client and employer - A software engineer, say for a health monitoring system, will not publish or take the patient data for personal use.
  • Product - The software engineer should not skip any testing or release an unstable version of the product.
  • Judgment - If the engineer thinks they see a design flaw in the product, they should bring it up with the supervisors early instead of later in the development process.
  • Management - Team leaders should not encourage engineers to cut corners and sacrifice performance or security in order to meet a deadline.
  • Profession - If an engineer were to intentionally submit a program with a bug in it that caused significant repercussions, imagine the distrust it would create for other clients looking to hire a developer.
  • Colleagues - If a colleague were to suggest a solution to a problem, or offer a piece of code, the software engineer should recognize them instead of taking credit for it.
  • Self - As new hardware, technologies, and development methods become available, the developer should try to learn and familiarize themselves with them. They should also keep any certifications they do have up to date.

1.10) To help counter terrorism, many countries are planning or have developed computer systems that track large numbers of their citizens and their actions. Clearly this has privacy implications. Discuss the ethics of working on the development of this type of system.

This scares me a lot. If you're on the development team, it's not a matter of just collecting statistical data and keeping the user specifics hidden, like in, say, a study of childhood obesity. The whole point of a project like this would be to know exactly who is doing what kinds of activities. I think there is a lot of room for error in a project like this, and I feel it would lead to even more racial and economic profiling. What happens if they're wrong when they call someone a terrorist? That person goes to prison for nothing? Or perhaps they send them to a secret "correctional" facility? There has to be some other way of preventing terrorism that doesn't invade our privacy so much. After all, what good is a right to privacy if we don't exercise it? We may as well not have one.

Introduction

Hello world, and welcome to my blog! Well, let's be honest, at most maybe my classmates will view this... but maybe it will be of some help to you. This is going to be my blog for CSCI 362 - Software Engineering. Here I will post things related to the course from the reading, homework, and occasionally other random things I find interesting.

Let me introduce myself before starting the first homework, though. My name is Justin and I've been attending the college since 2009, and plan on graduating in the Spring of 2014. The five year plan isn't too uncommon, right? Like most of you, I am majoring in Computer Science, and am aiming for a Bachelor of Science with a lab science focus in physics. My interests include cars, computers, music, some games, and being a generally handy kind of guy. I like to take stuff apart, and build other stuff, and fix broken stuff. I also have hermit crabs. I think we should end it there.