Tutorials - Testing and Process Recommendations for Software Engineering

Return to Part One

Bug tracking
Bug identification and tracking can be a controversial subject in a small organization. On the one hand, engineers may not like it when others point out errors in their code. (Or, to put it another way, they may worry that records of their mistakes may be used against them in subsequent performance reviews.) On the other hand, recording bugs helps make sure that a bug, once discovered, is not forgotten. It’s a delicate balancing act.
 
It is recommended that a small organization’s first efforts to track bugs should be extremely informal. One specific way to do this might be to implement internal newsgroups for the discussion of bugs related to the project(s).
 
So, for example, if OurCompany.com is working on a project called Project1, you might want to make two internal newsgroups in which you can discuss issues relating to that project:
 
oc.project1 -- for discussion relating to Project1
oc.project1.bugs -- for discussion of bugs found in Project1
 
A bug report might then be a message to the bugs newsgroup that looks something like this:
 
Subject: BUG: [jsmith] Tab order in customer-data screen is broken
When you go to the Customer Data screen, click in any of the text fields, and hit the tab key repeatedly, the order in which the fields get the keyboard focus jumps all around the screen, rather than progressing through the screen in an orderly fashion.
 
The subject line indicates that this is a new bug report, that the reporter thinks that jsmith is the person who will need to fix it, and includes a short description of the bug.
 
Discussion of the bug report should be done with the ‘reply’ mechanism, so that the discussion of each bug is segregated into its own thread.
 
Then, when jsmith fixes the bug, he should reply to a message in this bug thread and change the subject line:
 
Subject: FIX: [jsmith] Tab order in customer-data screen is broken
I found and fixed the problem. See MyProject/src/customer/foo.asp. The fix is in the central repository as of 2pm today.
 
By using the subject field this way, the discussion of a bug will look like this:
Mary Jones    BUG: [jsmith] Tab order in customer-data screen is broken
John Smith    Re: BUG: [jsmith] Tab order in customer-data screen is broken
Mary Jones    Re: Re: BUG: [jsmith] Tab order in customer-data screen is broken
John Smith    FIX: [jsmith] Tab order in customer-data screen is broken
 
Anyone reading these discussions can easily see which bugs have been fixed and which ones haven’t.
 
It is predicted that, some time after the organization has learned to use internal newsgroups this way, people will want to answer additional questions about bugs:
  • How many bugs are open against the current project?
  • What bug should I be working on first?
  • Did John Smith’s fix to this bug really fix the problem?
  • How many bugs have we been finding against this project, both now and at various times in the past?
Be aware that a commercial bug-tracking system will make it easier to answer those questions, but at some cost. Everyone in the organization has to be consistent in how they report and track bugs, and it often takes longer to report, comment on, and move bugs around in such a system than with this newsgroup method. So it is recommended that you wait until you are sure you are spending too much time manually sifting through newsgroup discussions before you go looking for a commercial bug-tracking solution. By the time you know you are ready for one, you will know how to identify a system that suits your needs (or at least how to start looking).
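In fact, because the subject-line convention is so regular, some of these questions can be answered by a small script that scans the newsgroup's subject lines. The following is a minimal sketch; the subject list and the `bug_status` helper are hypothetical, not part of any real newsgroup API:

```python
import re

# Hypothetical subject lines pulled from the oc.project1.bugs newsgroup.
subjects = [
    "BUG: [jsmith] Tab order in customer-data screen is broken",
    "Re: BUG: [jsmith] Tab order in customer-data screen is broken",
    "FIX: [jsmith] Tab order in customer-data screen is broken",
    "BUG: [mjones] Sum page rejects negative numbers",
]

def bug_status(subjects):
    """Map each bug title to 'open' or 'fixed' based on its subject tags."""
    status = {}
    for subj in subjects:
        m = re.match(r"(?:Re:\s*)*(BUG|FIX):\s*\[(\w+)\]\s*(.+)", subj)
        if not m:
            continue  # not a bug-report subject; ignore it
        tag, assignee, title = m.groups()
        if tag == "FIX":
            status[title] = "fixed"       # a FIX message closes the bug
        else:
            status.setdefault(title, "open")
    return status

open_bugs = [t for t, s in bug_status(subjects).items() if s == "open"]
```

A script like this can count open bugs or list what remains to be fixed, which postpones the need for a commercial tool a little longer.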
 
Written Test Scripts and Test Automation
Finding bugs often begins as a time-consuming and haphazard process. The easiest (and, in some ways, best) way to find bugs is to have someone use the program exactly as your end users will. Each engineer should run the product in his or her own area after making any significant changes before committing the results to the central repository. Engineering organizations soon learn that testing is a costly but unavoidable part of the process. A lot of testing is necessary because organizations usually do not fully appreciate how "interconnected" everything in the product is. Changes made in one directory or module may not seem to have effects elsewhere when they actually do, and unexpected inter-relationships among modules can cause bugs to pop up in unexpected places.
 
An organization’s first response should be to have a periodic review of the version of the product in the central repository, where engineers (and maybe even other people in the organization) play around with the product in order to find and report bugs.
 
After you go through this stage a few times, you will probably notice that "coverage" is haphazard. Bugs can lurk around for quite a while before being discovered. The same testing tasks may be repeated many times before other tasks are attempted the first time.
 
One way to make this "manual" testing effort more efficient and effective is to write down "scripts" that are to be periodically executed against the software. These test scripts should be placed under source code control, just like any other file associated with the project. The test script should be written in plain English such that someone who is computer-aware but not necessarily a developer can understand and carry it out.
 
For example, consider a simple web page where there are two fields and an "OK" button. The intention is that the users will type a number into each of the text fields and then hit the "OK" button. This should cause another page, which says "the sum of your numbers is (whatever)", to load.
 
The test script might start out like this:
1. For each of the following combinations below, enter the first string in the first field, and then the second string in the second field, and then click OK. Make sure that the next page shows the correct sum.
a. 3 1
b. 1.2 3.
c. .3 0.2
d. -4 -.3
 
2. For each of the following combinations below, enter the first string in the first field, and then the second string in the second field, and then hit OK. Make sure that an error dialog comes up saying "Illegal value" and that you can dismiss that error dialog by hitting its OK button.
a. a 3.0
b. 1.2 #
c. 0 ‘
d. --4 2
e. 3 3..
f. (blank) 4.0
g. 2.1 (blank)
h. (blank) (blank)
 
It might then go on to include test cases for more complicated cases, tab ordering, hitting forward and back on the browser, etc.
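If the page's number-validation logic were available as a callable function, steps 1 and 2 of the script above could be automated directly. Here is a minimal sketch in Python; `parse_number` is a hypothetical stand-in for the page's server-side check, not part of the actual product:

```python
def parse_number(s):
    """Hypothetical stand-in for the page's number validation:
    returns a float for legal input, raises ValueError otherwise."""
    return float(s)  # accepts forms like "3", "1.2", ".3", "3.", "-4", "-.3"

# Step 1: legal pairs -- the sum page should load with the correct total.
legal = [("3", "1"), ("1.2", "3."), (".3", "0.2"), ("-4", "-.3")]
for a, b in legal:
    total = parse_number(a) + parse_number(b)

# Step 2: illegal pairs -- the "Illegal value" dialog should appear.
illegal = [("a", "3.0"), ("1.2", "#"), ("0", "'"),
           ("--4", "2"), ("3", "3.."), ("", "4.0"), ("2.1", ""), ("", "")]
for a, b in illegal:
    try:
        parse_number(a) + parse_number(b)
        raise AssertionError(f"expected {(a, b)} to be rejected")
    except ValueError:
        pass  # rejected, as the test script requires
```

Note that this only exercises the parsing rule, not the page itself; checking tab order, browser navigation, and the error dialog still requires a human (or a GUI automation tool, discussed below).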
 
There are many reasons to produce and use test scripts like this:
  • Written test scripts can be reviewed for ideas that can apply to other test scripts.
  • Multiple people can more easily discuss and cooperate in testing when a significant portion of the test work is written down.
  • Members of your organization who are not programmers can run test scripts and, over time, learn to write very good ones.
Note that even when you have test scripts, you should encourage people to do some "free play" with the product during test cycles. Encourage people to keep some kind of record of what they’re doing. Then bugs that are discovered can be more readily reproduced and isolated, and the techniques used to uncover those bugs can make their way into subsequent versions of test scripts. "Free play" testing is very valuable.
 
After the organization has produced good test scripts and developed the habit of performing periodic tests, it will become evident that some (or many) of the tests had great value in their first few iterations, but are much less likely to actually discover bugs later. You can decrease the rate at which you run those tests, so as not to spend more time (and money) running those tests than they’re worth. Another way to attack this problem is by automating those tests.
 
Test automation is a fairly sophisticated technique. It is strongly suggested that your organization go through a few projects with increasingly rigorous manual test scripts before you embark on any kind of significant test automation plan. You need to understand many things about your engineering process before you can make a good decision about test automation.
 
It is suggested at this point that, when the time comes to consider test automation, you think along the lines of purchasing a commercial capture-and-playback test automation tool (like Segue’s QAPartner/SilkTest package) to automate the rigorous manual test scripts that you have developed. By the time it is a good business idea to do this test automation, you will recognize the need and will have had plenty of practice with engineering process improvement. Do your test automation slowly and carefully!
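To make the capture-and-playback idea concrete: such a tool records a sequence of user actions along with the results observed while the product is known to be good, then replays those actions later and flags any step whose result has changed. The toy harness below illustrates the principle only; all names are hypothetical, and real tools like SilkTest record GUI events rather than function calls:

```python
class Recorder:
    """Toy capture-and-playback harness (illustrative only)."""

    def __init__(self):
        self.steps = []  # captured (action, args, expected-result) tuples

    def capture(self, action, *args):
        """Run an action once and record its result as the expected value."""
        result = action(*args)
        self.steps.append((action, args, result))
        return result

    def replay(self):
        """Re-run every captured step; collect steps whose result changed."""
        failures = []
        for action, args, expected in self.steps:
            actual = action(*args)
            if actual != expected:
                failures.append((action.__name__, args, expected, actual))
        return failures

# Example "application under test": the sum page's core logic.
def add_numbers(a, b):
    return float(a) + float(b)

rec = Recorder()
rec.capture(add_numbers, "3", "1")   # recorded while the app is known-good
rec.capture(add_numbers, "-4", "2")
failures = rec.replay()              # run again later, after code changes
```

Even this toy exposes the main maintenance cost of capture-and-playback: whenever intended behavior changes, the recorded expectations must be re-captured, which is why automating a stable, well-understood manual script first pays off.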
 
Written Test Results
Each time someone carries out a test script, or performs some free form testing, he or she should write down what script was run, what version of the product it was run against, and what the results were. If you have implemented the simple newsgroup model discussed above, then a test results report might look something like this:
 
Newsgroup: oc.project1
Subject: Test Results - User Data Script 5/13/00 vs. Project1 v1.2.63
 
I ran the User Data script dated 5/13/00 on Project1 version 1.2.63 (the version in the central repository as of today at 3pm).
 
All tests passed except 2c. (the single quote test). I checked the bugs newsgroup and saw that there is an open bug on this already so I didn’t file a new one.
 
Note the distinction between the test script and the test results report. The script should be submitted to the central repository and versioned the same as source code. The test results reports can be sent to the project newsgroup.
 
These test results reports help engineers reproduce and isolate bugs by eliminating any possible confusion over when a particular bug was first discovered.
 
Coding and Commenting Guidelines
Standardized guidelines ensure that code and documentation are uniform in appearance, concept, and presentation, regardless of who actually does the programming. In addition, guidelines truly add professionalism to the coding process.
 
A set of coding guidelines should be defined for all of the computer languages and technologies that are used in-house. It is recommended that the guidelines be arrived at by mutual agreement among all the programmers. Equally important is a standardized format for commenting code. Indeed, uncommented code could prove to be a financial liability in the future, while properly commented code will make it easier for developers to become acquainted with legacy code or code created by other programmers.
 
For development environments and languages that offer them, automated formatting tools can be purchased to help apply the standards.
 
Code Reviews
It is recommended that code reviews be performed immediately after each significant program module or function is created. Ideally, two reviewers should peruse the code independently. At a minimum, the reviewers should verify that:
  • The code adheres to the coding and commenting guidelines.
  • The code is clear and understandable.
  • The code is efficient and logical.
Next, they should meet with the developer to discuss the code and recommend changes. Such a meeting should stress both the positive and problematic aspects of the code. All of the agreed upon changes, if any, should be documented. The developer should then implement the changes and this should then be verified by either one of the reviewers or a third person. Finally, the review case is marked closed.
 
As you can gather, the code review process is essentially treated in the same way as bug reports. Issues should be properly documented, they should be marked complete once they have been addressed, and the product should not ship until all matters have been addressed.
 
Conclusion
These are the steps that are recommended for a small engineering organization to take in order to make the difficult transition from individual effort to collaborative effort. These recommendations have been cast as informally as possible. It is critical that, for each process improvement, the first step be sufficiently small that you don’t end up with the organization in an uproar about how exactly to go about implementing it! Remember, each of these steps is an attempt to improve the ease with which people can cooperate at their actual work. None of these processes should grow to dominate the actual work of designing, implementing, debugging, and shipping the product.
 
The Guru wishes to thank J. Michael Hammond for this article.
 