PROGRAMMING ASSIGNMENT #2:

Empirical Evaluation of Solutions for Maximum Subsequence Sum Problem

Fall 2012

Due: October 5, 2012, 11:59PM on Angel Dropbox
(no extensions will be allowed)


General Guidelines:


PROBLEM: Maximum subsequence sum

For this assignment you will be comparing the performance of the four different algorithms we discussed in class for the maximum subsequence sum problem. Here are the details:
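For concreteness, here is a minimal sketch of two of the standard versions (the quadratic brute force and the linear-time scan), assuming C++ and a std::vector<int> interface; the function names and the convention that an empty subsequence has sum 0 are assumptions, so match whatever interface was given in class. The cubic and divide-and-conquer versions you covered would expose the same signature.

    // Sketch only: two of the four versions, assuming C++ and std::vector<int>.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // O(n^2): try every starting index i, extend the subsequence to the right.
    int maxSubSumQuadratic(const std::vector<int>& a) {
        int maxSum = 0;                                   // empty subsequence has sum 0
        for (std::size_t i = 0; i < a.size(); ++i) {
            int thisSum = 0;
            for (std::size_t j = i; j < a.size(); ++j) {
                thisSum += a[j];                          // running sum of a[i..j]
                maxSum = std::max(maxSum, thisSum);
            }
        }
        return maxSum;
    }

    // O(n): single left-to-right scan; a prefix with negative sum never helps,
    // so the running sum is reset to 0 whenever it goes negative.
    int maxSubSumLinear(const std::vector<int>& a) {
        int maxSum = 0, thisSum = 0;
        for (int x : a) {
            thisSum += x;
            if (thisSum > maxSum)
                maxSum = thisSum;
            else if (thisSum < 0)
                thisSum = 0;
        }
        return maxSum;
    }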

 

Report:

In a separate document (Word or PDF), compile the following sections:

·  A: Problem statement. In 1-2 sentences, state the goal(s) of this exercise.

·  C: Experimental results. In this section, include the following (a sketch of a possible timing driver appears right after this list):

    o  The plots from the above test results

    o  Whether the observations in the above plots match your theoretical expectations, e.g., whether each timing curve grows at the rate predicted by its asymptotic bound. If so, why, and if not, why not? Explain.
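A hypothetical timing driver along these lines could generate the raw data for the plots. C++ with std::chrono is an assumption, and the input sizes, value range, random seed, and CSV output format are illustrative choices only, not requirements of the assignment:

    // Hypothetical timing driver: times each version on random arrays of
    // increasing size and prints CSV (algorithm, n, milliseconds) for plotting.
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int maxSubSumQuadratic(const std::vector<int>& a);    // from the earlier sketch
    int maxSubSumLinear(const std::vector<int>& a);

    int main() {
        std::mt19937 gen(12345);                          // fixed seed so runs are repeatable
        std::uniform_int_distribution<int> dist(-100, 100);

        std::printf("algorithm,n,milliseconds\n");
        for (int n = 1000; n <= 32000; n *= 2) {
            std::vector<int> a(n);
            for (int& x : a) x = dist(gen);

            auto time = [&](const char* name, int (*f)(const std::vector<int>&)) {
                auto start = std::chrono::steady_clock::now();
                volatile int result = f(a);               // volatile keeps the call from being optimized out
                (void)result;
                auto stop = std::chrono::steady_clock::now();
                double ms = std::chrono::duration<double, std::milli>(stop - start).count();
                std::printf("%s,%d,%.3f\n", name, n, ms);
            };

            time("quadratic", maxSubSumQuadratic);
            time("linear", maxSubSumLinear);
        }
        return 0;
    }

Redirecting the output to a .csv file gives data that Excel, gnuplot, or a similar tool can plot directly; repeating each measurement several times and averaging would reduce timing noise.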


FINAL CHECKLIST FOR SUBMISSION:

    ___  Cover sheet

    ___  A separate folder containing all your source code (including the main function you used for testing)

    ___  Report

    ___  Both of the above zipped into a single archive called Program2<YourLastName>.zip

 


GRADING RUBRICS:

This assignment is mainly about empirical testing and analysis. There is not really a design component (except maybe for your test code, which calls the different function versions, times them, and generates a timing report). So in grading this PA, we will primarily look at how well you have designed your experiments, what the plots look like, and whether you have offered a satisfactory/convincing rationale to explain your observations. Therefore, the points will be distributed as follows:

The whole assignment is worth a total of 100 points.

----------------------------------------

CODING (45 pts):

(10 pts): Are all of the algorithm versions and the test driver implemented correctly?

(10 pts): Is the code implemented in an efficient way? That is, are there parts of the code that appear redundant or that are implemented in ways that could easily be improved? Does the code conform to good object-oriented programming practices?

(15 pts): Does the code compile and run successfully on a couple of test cases? (A sample sanity check is sketched right after this list.)

(10 pts): Is the code documented well and generally easy to read (with helpful comments and pointers)?
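For the test cases, one simple sanity check is a small array with a hand-computed answer. The values below are the classic textbook example, whose maximum subsequence sum is 11 + (-4) + 13 = 20; maxSubSumLinear is the assumed name from the earlier sketch, and every version should return the same value on this input:

    // Sketch of a sanity check against a hand-computed answer.
    #include <cassert>
    #include <vector>

    int maxSubSumLinear(const std::vector<int>& a);        // assumed name from the earlier sketch

    int main() {
        std::vector<int> a{-2, 11, -4, 13, -5, -2};
        assert(maxSubSumLinear(a) == 20);                  // 11 + (-4) + 13 = 20
        return 0;
    }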

REPORT (55 pts):

(15 pts): Is the experimental plan technically sound? Is the experimental setup specified clearly?

(10 pts): Are the results shown as well-annotated plots, and are the general trends in the results visible?

(30 pts): Are the justifications/reasons provided to explain the observations analytically sound? Is there a reasonable attempt to explain anomalies (i.e., results that go against analytical expectations), if any?

----------------------------------------

Obviously, to arrive at the above evaluation, the TAs will both read and run your code.