------------------------ Exercise #08 for CST8129 ------------------------
-Ian! D. Allen - idallen@idallen.ca

Remember - knowing how to find out an answer is more important than
memorizing the answer.  Learn to fish!  RTFM!  (Read The Fine Manual)

Global weight: 5% of your total mark this term
Due date: 10h00 Friday November 11, 2005

The online deliverables for this exercise are to be submitted online via
the T127 Linux Lab using the submit method described in the exercise
description, below.  No paper; no email; no FTP.

Late-submission date: I will accept without penalty online exercises that
are submitted late but before 14h00 Monday November 14, 2005.  After that
late-submission date, the exercise is worth zero marks.

Exercises submitted by the *due date* will be marked online and your marks
will be sent to you by email after the late-submission date.  Students
wanting "early marking" before the end of the November 11 course drop date
should let me know.

This exercise is due at 10h00 Friday November 11, 2005.

Exercise Synopsis:

   Marks: 5%

   Update your sorting script to include prompts and input validation.
   Use the sorted numbers to output a list of numbers and a total.
   Develop a "white-box" test plan that tests all pathways in your code.
   Execute the test plan; or, write a script to do it for you.

   Part I   - PDL and code - weight 20%
   Part II  - Test Plan    - weight 40%
   Part III - Testing      - weight 40%

Where to work:

   Do your Unix command line work on any WT127 workstation.  (You may
   login to the workstation remotely.)  The files you work on will remain
   in your account after you log off.  Do not erase your files after
   submission; always keep a spare copy of your exercises.

   WARNING: Do not attempt this exercise on a Windows machine - the text
   file format is different.  You must connect to and work on Unix/Linux.
   Note that you may connect to a lab workstation *from* a Windows machine
   (using PuTTY); however, you may not use the Windows machine itself to
   do your work.
   Use the vim editor on the Linux machine.

Location of the course notes on the Lab workstations:

   You can find a copy of all the course Notes files on any Lab
   workstation under directory:

      ~alleni/public_html/teaching/cst8129/05f/notes/

   You can copy files from this directory to your own account for
   modification or study, if you like.  (To avoid plagiarism charges, you
   must credit any material that you copy and submit unchanged as your
   own work.)

Location of the textbook CDROM files on the Lab workstations:

   The CDROM files for the Quigley textbook are available in the WT127
   Lab under the directory:

      /home/cst8129/

Exercise Preparation:

   A. Know where to find an online copy of all the course Notes on the
      Lab workstations.  (See above.)  You can get a copy of this
      exercise from the course notes.

   B. Complete the online Course Notes readings.

   Any questions?  See me in a lab or post questions to the Discussion
   news group (on the top left of the Course Home Page).

------------------- Part I - PDL and coding - weight 20% -------------------

Specifications and coding:
--------------------------

   Write PDL and then a script that needs three integers from the user.
   The integers may come from the command line, or through prompting for
   any or all missing arguments (up to three).

   Put the numbers in ascending order: smallest, middle, largest

   Output all the numbers from the smallest up to the largest; but, do
   not output the middle number or the number one above or one below the
   middle number.  Count each number as it is output, and keep a running
   total of the sum of the numbers output, and print this at the end of
   the script.

   Warning: You expect that you will be asked to modify this script in
   future to enhance its functionality.  Code your script to make it
   easy to modify.
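   Putting the specification together, the overall flow might be sketched
   as below.  This is an illustrative fragment only, not the required
   solution: the messages, the variable names (a, b, c, count, sum), and
   the "set -- 3 1 9" demo inputs are assumptions, and the integer
   validation step is left as a comment.

```shell
#!/bin/sh
# Illustrative sketch only - not the required solution.
set -- 3 1 9    # demo inputs; a real script would use actual arguments

# 1. Reject more than three command-line arguments.
if [ $# -gt 3 ]; then
    echo "$0: expecting at most 3 integers, found $# ($*)" 1>&2
    exit 1
fi

# 2. Collect up to three numbers, prompting for any that are missing.
a="$1" ; b="$2" ; c="$3"
for var in a b c ; do
    eval "val=\$$var"
    while [ -z "$val" ]; do
        tty -s && echo "Enter an integer: " 1>&2  # prompt only if interactive
        if ! read val ; then
            echo "$0: EOF on input; exiting" 1>&2
            exit 2                                # quick-exit non-zero on EOF
        fi
    done
    eval "$var=\$val"
done

# (Integer validation of $a $b $c would go here - see while1.sh.txt.)

# 3. Sort the three numbers ascending with pairwise swaps.
if [ "$a" -gt "$b" ]; then t="$a" ; a="$b" ; b="$t" ; fi
if [ "$b" -gt "$c" ]; then t="$b" ; b="$c" ; c="$t" ; fi
if [ "$a" -gt "$b" ]; then t="$a" ; a="$b" ; b="$t" ; fi
echo "The three numbers are (sorted): $a $b $c"

# 4. Output smallest..largest, skipping the middle number and its
#    immediate neighbours; count and sum what is actually printed.
count=0 ; sum=0 ; i="$a"
while [ "$i" -le "$c" ]; do
    if [ "$i" -lt $((b - 1)) ] || [ "$i" -gt $((b + 1)) ]; then
        count=$((count + 1))
        sum=$((sum + i))
        echo "$count: $i"
    fi
    i=$((i + 1))
done
echo "The sum of $count numbers from $a to $c skipping numbers near $b is: $sum"
```

   With the demo inputs 3 1 9, the sorted values are 1 3 9; the loop
   prints 1 and then 5 through 9 (skipping 2, 3, and 4), giving a count
   of 6 and a sum of 36.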
Coding details:

   Input validation (including prompting and reading missing arguments):

      Exit the script with a good four-part error message (see Notes
      file script_style.txt for the four parts) and a bad return status
      if more than three arguments are supplied on the command line.

      The script should prompt the user (on standard error) for any
      missing numbers on the command line (less than three), and read
      the missing values from the keyboard using the shell's "read"
      command.  Quick-exit the script non-zero if EOF is detected.

      Make sure all three of the items obtained from the user are
      numbers.  (Adapt the code for this from script while1.sh.txt in
      the Notes.)

   Sorting:

      Use your working integer sort algorithm to put the user's three
      integers in ascending order.  Once you have the three integers in
      order, output this line:

         The three numbers are (sorted): XXX YYY ZZZ

      where XXX is the user's smallest number, YYY the user's middle
      number, and ZZZ is the user's largest number.  (Some or all of the
      numbers may be the same.)

   Output:

      Print, one per line, each integer from the user's smallest number
      up to the user's largest number (inclusive), except do not output
      the user's middle number or the numbers one above or one below it.
      Number each line of output, starting at 1.  Keep a running total
      of the sum of the numbers actually printed.  (Some user inputs
      will result in no output from this step.)

      Before the script exits, output the running total as follows:

         The sum of NUM numbers from XXX to ZZZ skipping numbers near YYY is: SUM

      where XXX, YYY, and ZZZ are as given above, NUM is the number of
      numbers that were actually output, and SUM is the running total of
      the numbers that were actually output.

--------------------- Part II - Test Plan - weight 40% ---------------------

Test Plan:
----------

   Write an exhaustive "white-box" test plan for your script that will
   contain enough tests to exercise all your code and trigger each error
   message, prompt, and output statement at least once.
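   As a starting point for the sorting tests, the relative orderings of
   three integers can be enumerated directly.  A sketch (the values
   1/2/3 are stand-ins; any inputs with the same relative order exercise
   the same comparison paths in the sort):

```shell
#!/bin/sh
# Sketch: the 13 relative orderings of three integers:
#   6 with all values distinct, 6 with exactly two equal, 1 all equal.
# These could seed the sorting tests in a white-box test plan.
cases="1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1
1 1 2
1 2 1
2 1 1
1 2 2
2 1 2
2 2 1
1 1 1"

echo "$cases" | while read a b c ; do
    echo "planned sort test: arguments $a $b $c"
done
n=$(echo "$cases" | grep -c .)    # count the non-empty lines
echo "total sort test cases: $n"
```

   Each of these 13 triples would become one numbered test in the plan;
   separate tests are still needed for the middle-number skipping and
   for the prompting behaviour.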
   Each test will specify a set of command line arguments and/or inputs
   and the expected output (including error messages) for those inputs.
   You will choose sufficient test cases so that every statement in your
   code is tested and executed at least once.  (You do not have to
   exercise every combination of input, as long as your test plan makes
   sure that every line of your code executes at least once.)

   You must also specify sufficient test cases to demonstrate that your
   algorithm will likely work for all combinations of integer inputs.
   With only three numbers as input, the number of relative combinations
   to test the sorting algorithm is fairly small (about 13).  You will
   also need to develop test cases that justify and show that the
   middle-number skipping code works for all possible inputs, and that
   the prompting for missing numbers works correctly.

   Each test must have a test number that you will refer to when you
   actually test the script.

--------------------- Part III - Testing - weight 40% ---------------------

Running your tests against your script:
---------------------------------------

   You must apply your test plan to your script, running every test that
   you have written in Part II.  Your script does not have to pass all
   the tests to get a good mark in this section.  The intent of this
   section is to show that you know *how* to test your script, even if
   the script isn't perfect yet.

   You may test your script in one of two ways.  Choose one; don't do
   both.  Method A (Manual Testing) is worth up to 79%; Method B
   (Automatic Testing) is the preferred method and is worth up to 120%
   (20% bonus).

A. Manual Testing (instead of automatic) - max mark 79% :

   If you choose to do Manual Testing, the maximum mark you will receive
   for testing will be up to 79%.  See "Automatic Testing" if you want
   to try for higher marks for testing.  (Real-world tests are usually
   automated.)
   To demonstrate that you have tested your script, start a saved
   "script" terminal session and manually execute your script over and
   over, following every test in your test plan, testing your script and
   demonstrating the output produced.  Before and after each test,
   interleave comment lines in your script test log that refer to test
   numbers in your test plan and whether you observe that your script
   passes or fails each test (see below).

   Your manual testing log will look like this (note that the shell
   ignores comment lines starting with '#'), with each test start and
   end clearly marked with a shell comment:

      $ script testlog.txt
      Script started, file is testlog.txt
      $ ### Test 1: test for more than three arguments
      $ ./exercise08script.sh 1 2 3 4
      ... some correct output prints here ...
      $ ### Test 1: passed
      $
      $ ### Test 2: test for non-integer arguments
      $ ./exercise08script.sh a b c
      ... some incorrect output prints here ...
      $ ### Test 2: failed test - did not implement this correctly yet
      $
      $ ### Test 3: ...
      ...etc. for all of your tests...
      $ exit
      Script done, file is testlog.txt
      $ col -b <testlog.txt >exercise08testlog.txt

   When you are finished all of your tests, make sure you process the
   raw log file to remove backspaces and extra carriage returns (as
   shown above) before submitting it.  You may lightly edit the test log
   before you hand it in, to remove typing mistakes that you made, to
   add blank lines to separate tests, or to merge several test logs into
   one file.

   For full marks, the start and end of each test must be clearly
   numbered in the test log, in a manner similar to that shown above.
   You may edit the test log to add any missing start and end comments
   to the log after you have run all the tests.  Separate each test by
   an empty line or empty prompt.

   Before submission, put your assignment label at the start of the test
   log file, with each line prefixed by an octothorpe character ('#').

B.
Automatic Testing (instead of manual) - max mark 120% :

   If you choose to implement Automatic Testing, the maximum mark you
   will receive for testing will be up to 120% (up to a 20% bonus).

   To do Automatic Testing of your program instead of doing Manual
   Testing, you must write a second script (named exercise08tester.sh)
   that will run each of your tests for you.  (Automatic Testing is the
   way real testing is done on the job!)

   Instead of running a saved manual terminal session and typing in all
   your test cases by hand, write a testing script named
   exercise08tester.sh that runs each test for you.  You will not need
   to use the "script" command or a terminal log - the test script
   itself and its pass/fail output provides all the needed
   documentation.

   Your exercise08tester.sh script will run each of your test cases
   against your exercise08script.sh script and compare (using "diff")
   the actual output of your script with the expected output.  The
   "diff" command returns a good status if there are no differences; use
   it to compare the actual output with the expected output and tell
   whether or not the test passed or failed.

   The exercise08tester.sh script you write to test your script might
   contain testing code that looks similar to this (but parametrized
   better to reduce duplicate code and make maintenance easier):

      # This is an excerpt from the exercise08tester.sh script
      #
      cd "$HOME/testplan7" || exit $?
      tmp="/tmp/$USER.$$"

      echo "Test 1: test for more than three arguments"
      ./exercise08script.sh 1 2 3 4 >"$tmp" 2>&1
      if diff "$tmp" 1.txt ; then
          echo "Test 1: passed - output is same as 1.txt"
      else
          echo "Test 1: failed - output differs from expected output in 1.txt"
      fi

      # output some blank lines to visually separate each test
      echo ""
      echo "--------------------------------"
      echo ""

      echo "Test 2: test for non-integer arguments"
      ./exercise08script.sh a b c >"$tmp" 2>&1
      if diff "$tmp" 2.txt ; then
          echo "Test 2: passed - output is same as 2.txt"
      else
          echo "Test 2: failed - output differs from expected output in 2.txt"
      fi

      ...etc...

   You will need to create a set of files containing the expected output
   for each test, for use by "diff".  The contents of these files will
   come directly from your test plan expected output.  You must know
   what the expected output is for your given input.

   The script you are testing will sometimes need to prompt for input
   from the user; you will need to provide that input inside your
   exercise08tester.sh script.  (An automated test script must provide
   all inputs!)  You can provide input to "read" statements in your
   script by using one or more echo statements piped into the standard
   input of the script being tested, using parentheses to surround
   multiple echo statements, e.g.:

      echo "Test 9: test for prompting and reading three different numbers"
      ( echo 3 ; echo 2 ; echo 1 ) | ./exercise08script.sh >"$tmp" 2>&1
      if diff "$tmp" 9.txt ; then
          echo "Test 9: passed - output is same as 9.txt"
      else
          echo "Test 9: failed - output differs from expected output in 9.txt"
      fi

   The script's "read" statements will read numbers from the pipe, not
   from your keyboard; so, your tests can be fully automated.  (You can
   use the trick with "tty -s" mentioned in shell_read.txt, to stop your
   script from issuing prompts when not reading from a keyboard.)

   Your exercise08tester.sh script must itself be well-written,
   including the CST8129 script header and internal documentation.
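   The repeated pattern in the excerpt above can be factored into a
   single helper once shell functions are available.  A hedged sketch:
   run_test is a hypothetical helper name, and the expected-output files
   (1.txt, 2.txt, ...) are the ones built from the test plan.

```shell
#!/bin/sh
# Sketch: one parametrized test runner instead of duplicated blocks.
# run_test is a hypothetical name; expected output for test N is
# assumed to live in N.txt, as in the excerpt above.
tmp="/tmp/$USER.$$"

run_test() {
    # $1 = test number, $2 = description, rest = command line to run
    num="$1" ; desc="$2" ; shift 2
    echo "Test $num: $desc"
    "$@" >"$tmp" 2>&1
    if diff "$tmp" "$num.txt" ; then
        echo "Test $num: passed - output is same as $num.txt"
    else
        echo "Test $num: failed - output differs from expected output in $num.txt"
    fi
    echo ""
}

# self-demo with a trivial command standing in for exercise08script.sh:
echo "hello" >0.txt
demo=$(run_test 0 "demo: echo prints hello" echo "hello")
echo "$demo"
rm -f 0.txt "$tmp"

# real usage would look like:
# run_test 1 "test for more than three arguments" ./exercise08script.sh 1 2 3 4
# run_test 2 "test for non-integer arguments"     ./exercise08script.sh a b c
# tests that pipe input to the script can pass the pipeline via sh -c:
# run_test 9 "prompting test" sh -c '( echo 3; echo 2; echo 1 ) | ./exercise08script.sh'
```

   Passing the whole command line as arguments keeps each test to one
   line, and the test number drives both the messages and the name of
   the expected-output file.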
   Run your testing script and save the test output log for submission:

      $ ./exercise08tester.sh >exercise08testlog.txt 2>&1

   Before submission, put your assignment label at the start of the test
   log file, with each line prefixed by an octothorpe character ('#').

   Note: Your exercise08tester.sh will contain a lot of duplicated code,
   since the only difference between tests will be the test number and
   the script execution itself.  When you learn how to write shell
   functions, you will be able to remove most of this duplication and
   write a single function that contains all the common code for all the
   tests.  Stay tuned!

----------
Submission
----------

Submit the finished and fully labelled files (two or three) for marking
using the following Linux command line:

   If you used manual testing (Part III maximum mark 79%):

      $ ~alleni/bin/copy8 exercise08script.sh exercise08testlog.txt

   If you used automatic testing (for Part III up to 20% bonus marks):

      $ ~alleni/bin/copy8 exercise08script.sh \
           exercise08tester.sh exercise08testlog.txt

This "copy8" program will copy the selected files to me for marking.
You can copy the files more than once.  Only the most recent copies will
be marked.  Always submit *all* your files for marking at the same time.

P.S. Did you spell all the label fields and file names correctly?