Fall 2012

Due: September 19, 2012 @ 4:59PM P.T., submission on Angel

General Guidelines:


The goal of this programming exercise is to identify the performance and implementation tradeoffs between the linked list and array ADTs, by implementing the following game.

The MyJosephus problem is the following game: N people, numbered 0 to N-1, are sitting in a circle. Starting at person 0, a single hot potato is passed around. After M passes, the person holding the hot potato is eliminated, the circle closes ranks, and the game continues with the person who was sitting after the eliminated person picking up the hot potato to begin the next round of passing. The game is played until only one person is left (with the potato, of course), and that last remaining person wins (both the game and the potato!). For example, if M=0 and N=5, players are eliminated in order {0,1,2,3}, and player 4 (i.e., the 5th player) wins; if M=1 and N=5, the order of elimination is {1,3,0,4} before 2 wins.

An animated example for M=2, N=5 is provided here.

Your task is to provide two different implementations for the MyJosephus problem using STL list and vector, respectively. Your program should build upon the following source code:

Using the above source code, implement your own ListMyJosephus.cpp, VectorMyJosephus.cpp, and Person.cpp. Feel free to add more functions as needed for your implementation. The above source code is provided to give you a template to start with - i.e., it is by no means complete (and will not even compile as given). You will have to make whatever changes are necessary to complete it, compile it, and get it working for the project.

For testing and reporting, implement two separate test programs called TestListMyJosephus.cpp and TestVectorMyJosephus.cpp (not provided here). These programs should implement the following steps:

  1. Instantiate an object of the corresponding *MyJosephus class with user-supplied parameter values for N and M;
  2. Play a full game until a winner is found;
  3. After each elimination round, output the list of players still left in the game, starting from the lowest player id;
  4. At the end of the game, report the elimination sequence and the winner (in a single line);
  5. Report the following timing statistics:
        i) total time for playing the game,
        ii) average elimination time, i.e., the average time spent between two consecutive eliminations.
        While recording both of these times, do NOT include the time for step 3 (which prints the remaining players after each round). You can comment out that part of the code while measuring performance; it is there strictly for your debugging.
  6. The timing statistics should be in seconds, milliseconds, or microseconds (whichever gives the closest precision to the actual duration of the event). It may happen that a particular timed event takes less than a second, in which case a timer with one-second resolution will report 0 seconds. Obviously this does not mean the event took 0 seconds; it means you need to measure at a lower/finer resolution and use milliseconds or microseconds. If it turns out that the timed event is shorter than a microsecond, then you can treat that time as effectively "0" seconds - i.e., nothing to add to your timer variable.


In a separate written document (in Word or PDF), compile sections A through D as described below:

  • A: Problem statement. In one or two sentences, summarize the goal of the problem.
  • B: Algorithm design. Within one page, briefly describe the main ideas behind your algorithms for the two implementations. It is sufficient to explain how you initialize the circular list or array of players at the start of the game, and how you eliminate the next player at each step using the underlying data structure. You can use pseudocode (or plain English) to express this part of the report (use figures if possible).
  • C: Experimental setup. In this section, you should provide a description of your experiment setup, which includes but is not limited to
  • D: Experimental Results & Discussion. In this section, you should report and compare the performance (running time) of the list and vector implementations based on your observations, and provide justification for your observations.

    For your testing and reporting, conduct the following two sets of experiments separately for each of your two implementations:

    In your report for the results, address the following points:



        Each submission should be accompanied by a cover sheet: a single-page document that contains all the information listed in this sample cover sheet: Word or PDF. Assignments without a cover sheet will NOT be graded.


        ___  Cover sheet

        ___  Zipped folder of the source code

        ___  Written report

        ___  All the above three zipped into another folder archive (named after your last name)


    Grading Rubrics:

    Obviously, this assignment is not just about implementing the Josephus game correctly and getting it working; it is also about exploring two design routes, comparing them both analytically and empirically, and reporting the performance results with justifications. Therefore, the grading points will be distributed as follows:

    Assume that the whole assignment is worth 100 points.


    CODING (50 pts):

    (10 pts): Does the code implement the Josephus game correctly?

    (15 pts): Is the code implemented in an efficient way? That is, are there parts of the code that appear redundant, or that are implemented in ways that could easily be improved? Does the code conform to good Object-Oriented programming practices?

    (15 pts): Does the code compile and run successfully on a couple of test cases?

    (10 pts): Is the code documented well and generally easy to read (with helpful comments and pointers)?

    REPORT (50 pts):

    (15 pts): Is the algorithm designed efficiently? Are appropriate data structures used (where choice is provided)? Does the algorithm exploit the advantages of the different data structures used in an effective manner?

    (5 pts): Experimental setup specified.

    (10 pts): Are the results shown as well-annotated plots, and does the general trend of the results make sense?

    (20 pts): Are the justifications/reasons provided to explain the observations analytically sound? Is there a reasonable attempt to explain anomalies (i.e., results that go against analytical expectations), if any?


    To arrive at the above evaluation, the TAs will both read and run your code.