Artificial Intelligence II

CSE 5361 Section 001 Spring 2005

Homework 2

Due: February 22, 2005 (midnight). No late submissions accepted.

For this homework you will implement a State class and use it to implement a Reflex-Agent-With-State, as in Figures 2.11 and 2.12 of Russell and Norvig, to navigate the wumpus world. The specific requirements are enumerated below.

  1. Download, install, and run the wumpus-world simulator, which is available here. A readme file is included with the distribution. Your homework assignments will mostly involve modifying the file client/SimpleClient.java. Of course, you can also create a new class, say HW2Client, in a separate file based on the SimpleClient class.

  2. For this homework, we will limit the extent of the world to everything left of a vertical wall of obstacles from cell [21,2] to cell [21,5]. You may assume at least one gold piece and at least one wumpus. All wumpi will sit still. You may also assume that location [1,1] will always be empty, i.e., it is a safe starting location. A sample world meeting these constraints is available here.

  3. Implement a State class to hold information about the agent (e.g., location, arrows, gold) and information gained about the current wumpus world (e.g., locations of stenches, breezes, pits, wumpi, gold, walls). Your agent will be tested by running several trials on each of several worlds as constrained above, so you should retain state information from one trial to the next.
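     One possible shape for such a class is sketched below. All class, field, and method names here are illustrative assumptions, not part of the simulator's API; cells are keyed as "x,y" strings purely for convenient set-membership tests.

     ```java
     import java.util.HashSet;
     import java.util.Set;

     // Hypothetical State sketch: records what the agent has learned about
     // the world plus its own bookkeeping. Reuse one instance across trials
     // to retain knowledge from earlier runs on the same world.
     class State {
         private final Set<String> stenches = new HashSet<>();
         private final Set<String> breezes  = new HashSet<>();
         private final Set<String> walls    = new HashSet<>();
         private final Set<String> visited  = new HashSet<>();
         private String goldLocation = null;  // null until glitter is perceived

         // Agent bookkeeping: [1,1] is guaranteed safe, so start there.
         private int agentX = 1, agentY = 1;
         private int arrows = 1;
         private boolean hasGold = false;

         private static String key(int x, int y) { return x + "," + y; }

         void markStench(int x, int y)  { stenches.add(key(x, y)); }
         void markBreeze(int x, int y)  { breezes.add(key(x, y)); }
         void markWall(int x, int y)    { walls.add(key(x, y)); }
         void markVisited(int x, int y) { visited.add(key(x, y)); }
         void setGoldLocation(int x, int y) { goldLocation = key(x, y); }

         boolean hasStench(int x, int y) { return stenches.contains(key(x, y)); }
         boolean hasBreeze(int x, int y) { return breezes.contains(key(x, y)); }
         boolean isVisited(int x, int y) { return visited.contains(key(x, y)); }
         String getGoldLocation() { return goldLocation; }
     }
     ```

     Using sets of cells (rather than a fixed 2-D array) keeps the class independent of the world's exact dimensions, though an array indexed by [x][y] would also work given the bounded world in item 2.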

  4. Implement a method inside your agent called UpdateState that changes the State instance based on the recently received percept. For example, if you perceive a glitter in location [x,y], then you know that gold is there, and will always be there in subsequent trials.
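     A minimal sketch of this idea follows. The Percept fields (glitter, stench, breeze) are assumptions about what the simulator delivers; adapt the accessors to the actual percept class in the client code.

     ```java
     import java.util.HashMap;
     import java.util.Map;

     // Hypothetical UpdateState sketch: fold the latest percept into
     // per-cell knowledge keyed by "x,y". Names are illustrative only.
     public class UpdateStateDemo {
         // Minimal stand-in for the simulator's percept object.
         static class Percept {
             boolean glitter, stench, breeze;
             Percept(boolean g, boolean s, boolean b) {
                 glitter = g; stench = s; breeze = b;
             }
         }

         static Map<String, String> knowledge = new HashMap<>();
         static int agentX = 1, agentY = 1;  // current agent location

         // Record what the latest percept tells us about the current cell.
         static void updateState(Percept p) {
             String cell = agentX + "," + agentY;
             if (p.glitter)     knowledge.put(cell, "GOLD");   // gold stays put across trials
             else if (p.stench) knowledge.put(cell, "STENCH");
             else if (p.breeze) knowledge.put(cell, "BREEZE");
             else knowledge.putIfAbsent(cell, "CLEAR");
         }

         public static void main(String[] args) {
             updateState(new Percept(true, false, false));
             System.out.println(knowledge.get("1,1"));  // prints GOLD
         }
     }
     ```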

  5. Implement the ChooseAction method that returns the action based on the state information and condition-action rules. This procedure replaces the two lines calling Rule-Match and Rule-Action in the Reflex-Agent-With-State. Since we do not have a knowledge based built into the client, you can implement the condition-action rules in code, but try to be as modular as possible. For example, if the last percept contained glitter, then GRAB would be a good action. If you know a square is "unsafe", you might want to move away (if cautious) or move there (if adventurous). Most of your effort on this assignment should go into making ChooseAction as intelligent as possible. Corresponding effort in UpdateState will facilitate better action choices.
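     One modular way to code such rules is as an ordered list of guarded cases, tried until one fires. This is only a sketch under assumed names; the action strings (GRAB, GOFORWARD, TURNLEFT) follow the simulator's usual vocabulary but should be checked against the actual client classes.

     ```java
     // Hypothetical ChooseAction sketch: condition-action rules in priority
     // order, written as plain code rather than a rule language. The flags
     // below stand in for tests against the State instance.
     public class ChooseActionDemo {
         static boolean lastGlitter;   // last percept contained glitter
         static boolean frontUnsafe;   // square ahead is believed unsafe

         // Rules are tried in order; the first one that fires wins.
         static String chooseAction() {
             if (lastGlitter) return "GRAB";       // rule 1: gold here, grab it
             if (frontUnsafe) return "TURNLEFT";   // rule 2: cautious, avoid unsafe square
             return "GOFORWARD";                   // default: keep exploring
         }

         public static void main(String[] args) {
             lastGlitter = true;
             System.out.println(chooseAction());  // prints GRAB
         }
     }
     ```

     Keeping each rule a one-line guard makes it easy to reorder priorities or swap the cautious rule for an adventurous one without touching the rest of the method.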

  6. Document the design of your Reflex-Agent-With-State in a separate document (MSWord, PDF or text). This document should describe what is maintained by your State, how it is updated, and how you use it to choose an action.

  7. Submit all source code and design document files in one zip file to me (holder@cse.uta.edu) by the above deadline. Also include in your submission a README file that describes what files are in your submission and any special instructions for building and running your agent. In addition to correct functionality and satisfaction of the above constraints, your submission will be graded based on good programming style and documentation.