Semantics
CptS 355 - Programming Language Design
Washington State University

Dynamic Semantics

Dynamic semantics addresses the question "what does this program compute?" The usual approach to answering it is to look at how the program state (the collection of values of the variables in the program, along with the program counter) evolves as the computation proceeds from statement to statement. Usually we are interested primarily in the final state. Since programs in many languages execute statement by statement, a dynamic semantics is a precise description of how each statement modifies the state. There are three kinds of approaches to semantics.
  • Operational semantics - Describe precisely the meaning of each statement in a high-level language in terms of a low-level language. My description of PostScript was essentially done by giving an informal operational semantics.
  • Axiomatic semantics - Useful in proving correctness of programs. Think of program as a proof and each statement as an axiom. The axiom relates a precondition (about the state) to a postcondition (about the state).
  • Denotational semantics - Each statement can be modeled as a function that relates the input (the state prior to the statement) to the output (the state after the statement). The program is modeled by composing functions.

Operational Semantics

This is a commonly used style of semantics, and the easiest one for a compiler implementer to use. You can think of it as a precise description of how to map a statement in a high-level language to low-level intermediate code. For example, consider a statement of the form:
  if <predicate> then
    <stmt_list>1
  else
    <stmt_list>2
Operational semantics can be expressed by means of a translation function, T, that maps programs in a high-level language to programs in a lower-level language. For example, T applied to the if statement above might yield:
     T(<predicate>)
     # Assume the boolean result of the predicate ends up in R1
     beq R1, false, L1    # branch to L1 if the predicate is false
     T(<stmt_list>1)
     goto L2
L1:
     T(<stmt_list>2)
L2:
Note that we are assuming that the meaning or semantics of the low-level language is understood, sound, and rigorous.
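As a concrete illustration, the translation scheme above can be sketched as an executable function. This is my own sketch, not part of the notes: the instruction mnemonics (lt, beq, add, sub, goto) and the helper names fresh_label and translate_if are illustrative assumptions, following the convention above that the predicate leaves its boolean result in R1.

```python
# A minimal sketch of a translation function T for the if statement.
# Instructions are represented as strings; labels are generated fresh.

_label_counter = 0

def fresh_label():
    """Generate a unique label: L1, L2, ..."""
    global _label_counter
    _label_counter += 1
    return "L%d" % _label_counter

def translate_if(predicate_code, then_code, else_code):
    """T(if <predicate> then <stmt_list>1 else <stmt_list>2).

    The arguments are lists of already-translated low-level instructions;
    the predicate code leaves its boolean result in R1.
    """
    else_label = fresh_label()
    end_label = fresh_label()
    return (predicate_code
            + ["beq R1, false, " + else_label]  # skip the then-branch
            + then_code
            + ["goto " + end_label]             # skip the else-branch
            + [else_label + ":"]
            + else_code
            + [end_label + ":"])

# Translate: if x < 0 then x = y + 1 else x = y - 1
code = translate_if(["lt R1, x, 0"], ["add x, y, 1"], ["sub x, y, 1"])
for instr in code:
    print(instr)
```

Note that label generation must be fresh for each translated if statement, since translations of nested statements would otherwise collide.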

Axiomatic Semantics

Mitchell's textbook does not cover axiomatic semantics, so it is particularly important that you learn this material in lecture and by doing the homework. Because you will not have had homework on this before the first exam, questions on axiomatic semantics will not appear until the second exam.

While operational semantics tells us in detail how a program is to be executed, the goal of axiomatic semantics is to help us understand what a program does.

Review the material covered in Math 216 on propositional and predicate logic as a prelude to tackling this section. You are likely to need DeMorgan's laws for negation of conjunctions (and) and disjunctions (or) and also their extension to statements quantified using "for all" and "there exists".

Fundamental to understanding axiomatic semantics is an understanding of state. For our purposes here, a state of a computation (of some program) is simply the values of all the variables in the program. We use predicate logic formulas to describe properties of states that are of interest to us. Examples:

    x = y            %% true in states where x and y have equal values
    z < x + y        %% true in states where z is less than x+y
In the context of axiomatic semantics these will be called assertions and will be written inside curly braces {}. Annotating a point in a program with an assertion means "whenever the program is executing at that point, the assertion is true of the current state". The assertion before a statement is called its precondition and the one after is called its postcondition. Example: if S is a statement in a program and we see
  {P} S {Q}
then P is the precondition and Q is the postcondition.
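The meaning of a triple {P} S {Q} can be made concrete with a small executable sketch (my own illustration, not from the notes): the triple is valid when every state satisfying P leads, after executing S, to a state satisfying Q. Here states are dictionaries, assertions are boolean functions of a state, and the checker simply enumerates a finite collection of candidate states.

```python
# A tiny checker for the Hoare triple {P} S {Q} over sampled states.

def triple_holds(P, S, Q, states):
    """{P} S {Q} holds on `states` if Q is true after S in every P-state."""
    return all(Q(S(dict(s))) for s in states if P(s))

def S(s):                        # the statement: z = x + y
    s["z"] = s["x"] + s["y"]
    return s

P = lambda s: s["x"] > 0 and s["y"] > 0
Q = lambda s: s["z"] > 0

states = [{"x": x, "y": y, "z": 0} for x in range(-2, 3) for y in range(-2, 3)]
print(triple_holds(P, S, Q, states))   # {x > 0 and y > 0} z = x + y {z > 0}
```

Enumeration over a finite grid is of course only a test, not a proof; the rules in the following sections are what let us reason about all states at once.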

The fundamental insight of axiomatic semantics is that if we want some property to be true of the state after executing a statement it is straightforward to determine what must be true of the state before executing the statement. The before-to-after direction isn't nearly as easy, so axiomatic semantics is initially counter-intuitive and feels like it works "backwards" to many people.

Suppose that I have some statement S with postcondition Q that I wish to be true of the state after executing S. The weakest precondition of S with respect to Q, written wp(S, {Q}), is a predicate that describes all of the states from which executing S leads to a state satisfying Q.

An axiomatic semantics is essentially a set of rules for reasoning about and deriving weakest preconditions for statements of a programming language.

Notice that wp() is a bit like an integral expression in calculus in the following way: it has a well-defined meaning, but without applying some rules to transform it to a simpler form, we won't understand very much about what it is telling us of the initial state. Indeed, axiomatic semantics is sometimes called the weakest-precondition calculus.

Assignment

The weakest precondition for the assignment statement x = E, where E is an expression and x is a variable, wp(x = E, {Q}), is
   Q[x -> E]
meaning Q with every occurrence of x replaced by E. For example,
  wp(x = y * 5, {x = 10 and z = 4})
is {y * 5 = 10 and z = 4} or equivalently {y = 2 and z = 4} after algebraic simplification. As another example consider
  wp(x = x * 5, {x < 10 and z = 4})
Applying the substitution rule for assignment yields {x * 5 < 10 and z = 4} or equivalently {x < 2 and z = 4}.

We want the weakest precondition since often there are many possible preconditions. Consider

  wp(x = y * 5, {x < 10})
One precondition is {y < 0}, but it is not the "weakest", since if y is 0 or 1 prior to the assignment the postcondition is still satisfied. The weakest precondition is {y * 5 < 10}, or equivalently {y < 2}.
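The substitution rule for assignment can be checked by execution. The following is my own sketch, not from the notes: a state is a dict, an assertion is a boolean function of a state, and wp(x = E, Q) is computed operationally as "Q holds in the state produced by performing the assignment".

```python
# Operational check of the assignment rule: wp(var = expr, Q).

def wp_assign(var, expr, Q):
    """Weakest precondition of the assignment var = expr(state) w.r.t. Q."""
    def pre(state):
        after = dict(state)
        after[var] = expr(state)   # the effect of textual substitution
        return Q(after)
    return pre

# wp(x = y * 5, {x = 10 and z = 4}) should equal {y = 2 and z = 4}
Q = lambda s: s["x"] == 10 and s["z"] == 4
P = wp_assign("x", lambda s: s["y"] * 5, Q)

# Check agreement with the hand-derived answer over a grid of states.
for x in range(-3, 4):
    for y in range(-3, 4):
        for z in range(0, 6):
            s = {"x": x, "y": y, "z": z}
            assert P(s) == (y == 2 and z == 4)
print("wp matches {y = 2 and z = 4} on all sampled states")
```

Notice that the prior value of x does not appear in the derived precondition: the assignment overwrites it, which is exactly what the substitution captures.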

Sequences

The sequence rule:
   wp( S1; S2, {Q})
is
   wp(S1, wp(S2, {Q}))
So for example
   wp(x1 = E1 ; x2 = E2, {Q})
is
  wp(x1 = E1, {P'})
where
  P' = Q[x2 -> E2]
so
  wp(x1 = E1 ; x2 = E2, {Q})
is
  (Q[x2 -> E2])[x1 -> E1]
As a concrete example:
  wp(x=z+1; y=x*2, {y=10})
is
  wp(x=z+1, {x*2=10})
is
  (z+1)*2=10
or equivalently by solving for z:
  z = 4
Notice that the application of these rules is completely mechanical: the rules tell you exactly what has to be done to go from wp(S, {Q}) to a formula that doesn't involve wp(). Again, in analogy to integrals: once you know the rule for powers of x you can apply it without thinking about it.
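The concrete sequence example above can also be checked by execution. This is my own sketch, not from the notes: the two assignments are run in order on a copy of the state, and we confirm that the postcondition holds afterward exactly when the derived precondition z = 4 held beforehand.

```python
# Operational check of wp(x = z + 1; y = x * 2, {y = 10}) = {z = 4}.

def run_seq(state):
    """Execute the two assignments, in order, on a copy of the state."""
    s = dict(state)
    s["x"] = s["z"] + 1
    s["y"] = s["x"] * 2
    return s

Q = lambda s: s["y"] == 10

for z in range(-5, 10):
    s0 = {"x": 0, "y": 0, "z": z}
    # Q holds after the sequence exactly when z = 4 held before it.
    assert Q(run_seq(s0)) == (z == 4)
print("wp of the sequence agrees with {z = 4}")
```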

A compact representation of the sequence rule is in asserted program form

   {P} S1 {Q} S2 {R}
where Q => wp(S2, R) and P => wp(S1, Q).

Selection (if)

The axiomatic meaning of if:
  wp(if B then S1 else S2, {Q})
is
  (B => wp(S1, {Q})) and (not B => wp(S2, {Q}))
Let's look at an example.
  wp(if x < 0 then x = y + 1 else x = y - 1, {x = 4})
is
  (x < 0 => y = 3) and (x >= 0 => y = 5)
Equivalently, using the definition of => and the distributive law:
  (x < 0 and y = 3) or (x >= 0 and y = 5)
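The if example can likewise be verified by execution. This is my own sketch, not from the notes: run the conditional on a copy of the state and confirm that the postcondition holds afterward exactly in the states described by the derived precondition.

```python
# Operational check of wp(if x < 0 then x = y + 1 else x = y - 1, {x = 4}).

def run_if(state):
    s = dict(state)
    if s["x"] < 0:
        s["x"] = s["y"] + 1
    else:
        s["x"] = s["y"] - 1
    return s

Q = lambda s: s["x"] == 4
derived_pre = lambda s: ((s["x"] < 0 and s["y"] == 3) or
                         (s["x"] >= 0 and s["y"] == 5))

for x in range(-4, 5):
    for y in range(0, 8):
        s = {"x": x, "y": y}
        assert Q(run_if(s)) == derived_pre(s)
print("derived precondition matches on all sampled states")
```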

A compact representation of the selection rule is in asserted program form

   {P}
   if B then {P and B} S1 {Q} else {P and not B} S2 {Q}
   {Q} 
where (P and B)=> wp(S1,Q) and (P and not B) => wp(S2,Q).

Loops

Loops are harder to reason about because we don't know how many times the loop body will be executed. Like the general integration problem in calculus, computing the weakest precondition of a loop is hard, unlike the other program structures we have looked at.

Loops are a powerful programming tool: they represent much computation with little code. Axiomatic semantics has an equally powerful tool for reasoning about what loops do. The key to reasoning about loops is to identify a predicate called a loop invariant, denoted I. The invariant is an assertion that is true before each execution of the loop test (including the first). Using an invariant is like using induction: we show that it is true for the base case, and that if it is true after n-1 iterations then it is true after n iterations. The rule for

  while B do S end
is easiest (not easy!) to understand when presented in asserted program form.
  {I and B} S {I} implies {I} while B do S end {I and (not B)}
which spelled out in wp() terminology says if
  (I and B) => wp(S, {I})
then
  I => wp(while B do S, {I and not B})
Therefore, using this approach to establish {P} while B do S {Q} we must establish four things.
  1) P implies I
  2) {I and B} S {I}
  3) (I and (not B)) implies Q
  4) the loop terminates
We have to find candidates for the invariant by experience, insight, experiment. Once we have a candidate we can test it using the rules above to see if it meets our needs. Let's look at an example.
  while x != y do y = y - 1 {x = y}
For zero iterations the weakest precondition is
  {x = y}
For one iteration the weakest precondition is
  {x = y - 1}
For two iterations the weakest precondition is
  {x = y - 2}
For n iterations the weakest precondition is
  {x = y - n}
We know that for any non-negative n
  {x = y - n} implies {x <= y}
So our loop invariant is
  {x <= y}
which we will also choose for P. Now let's check whether {I and B} S {I} holds.
  {x <= y and x != y} y = y - 1 {x <= y}
and it does since
  {x <= y and x != y} implies {x < y} 
and
  {x < y} y = y - 1 {x <= y}
Now let's check if (I and (not B)) implies Q.
  {x <= y and (not x != y)} implies {x = y}
simplified
  {x <= y and x = y} implies {x = y}
which we know is true since
  P and Q implies Q
Finally, we have to check that the loop terminates. Informally, we observe that at each iteration the difference between x and y decreases by one and that if the two are ever equal the loop exits. So indeed the loop terminates when x starts out no bigger than y. If we can show loop termination, we have total correctness. If we can't we have only partial correctness.
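The proof obligations for this example can be tested over a grid of states. This is my own sketch, not from the notes: I, B, and Q are the invariant, guard, and postcondition from above, and the checks mirror obligations 2-4 (obligation 1 is trivial here since we chose P = I).

```python
# Checking the obligations for {x <= y} while x != y do y = y - 1 {x = y}
# with invariant I = {x <= y}, over sampled integer states.

I = lambda s: s["x"] <= s["y"]   # the loop invariant
B = lambda s: s["x"] != s["y"]   # the loop guard
Q = lambda s: s["x"] == s["y"]   # the postcondition

def body(state):
    """One execution of the loop body: y = y - 1."""
    s = dict(state)
    s["y"] = s["y"] - 1
    return s

for x in range(-3, 4):
    for y in range(-3, 4):
        s = {"x": x, "y": y}
        if I(s) and B(s):        # 2) {I and B} S {I}: the body preserves I
            assert I(body(s))
        if I(s) and not B(s):    # 3) (I and not B) implies Q
            assert Q(s)
        if I(s):                 # 4) termination: y - x is a non-negative
            assert s["y"] - s["x"] >= 0   # measure that decreases by 1
print("invariant obligations hold on all sampled states")
```

The quantity y - x in the last check is a variant (or bound) function: it is non-negative whenever I holds and strictly decreases on each iteration, which is the standard way to argue termination.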

Denotational Semantics

Denotational semantics is covered extensively in the book but I will not be lecturing on it and it will not appear on the test.

Source of Information

These lecture notes are based in part on Chapter 3 in "Programming Languages, 6ed" by Robert Sebesta and "An Axiomatic Basis for Computer Programming" by C.A.R. Hoare (linked as notes for this material on the course web page).
(c) 2003 Curtis Dyreson, (c) 2004-2006 Carl H. Hauser           E-mail questions or comments to Prof. Carl Hauser