The first question is about terminology. In the course video, you say there are 4 steps for “recursive construction”, where the final step is “prove the fixpoint is the least one”. But in the solution for Ex.356 and several other exercises, where the question asks “what does recursive construction yield? Does it satisfy the fixedpoint equation?”, you didn’t show that the construction is the least one. My question is: when I encounter the phrase “recursive construction” in an exam, does my answer have to be the least one?
1.1 Considering the first and second steps of “recursive construction”, do we always have to show the first several expansions? For the example in the lecture video, and for exercises like 370, the expansion is not necessary for the guess, and sometimes (especially for 370) the expansion is very troublesome.
1.2 For the solutions of Ex.373 and 374, I don’t know why they are the strongest implementable fixpoints. I see that the solutions don’t provide proofs either; does that mean we don’t have to worry about how to show it formally? (Also, I don’t understand the strongest fixpoint in the example in the video. Why are there only those solutions for that fixpoint equation? Why is that one the weakest among all the solutions? I know I can prove it is the weakest among the solutions shown, but what about the others?)
1.2.1 I suddenly realized that you said the set of solutions is a lattice. So if I start the recursive construction from “⊤”, the weakest element, I get back the weakest solution. Is that what you mean?
1.2.2 But that theorem, if I remember correctly, needs the F in the fixpoint equation to be nondecreasing.
1.3 The solution for Ex.375 is blank.
1.4 The solution for Ex.357 involves LIM. Is using LIM a necessary step in “recursive construction”? Also, I am not sure how to compute LIM. Is it because (∀m· ∃n· x: P(m+n)) always yields false, so the result is null?
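If it helps to see what I mean in 1.2.1, here is a toy check on a small finite lattice. The lattice (subsets of {0, 1, 2} ordered by ⊆, with the full set playing the role of ⊤) and the function F are invented for the example; all that matters is that F is monotonic:

```python
# Toy check of 1.2.1: iterate a monotonic F from the top (weakest) and
# bottom (strongest) elements of the powerset lattice of {0, 1, 2}.
# F below is made up for illustration; it is monotonic w.r.t. subset order.
U = frozenset({0, 1, 2})

def F(s):
    # keeps 0 always, and keeps 1 only if 1 is already present
    return frozenset({0} | ({1} if 1 in s else set()))

def iterate(F, start):
    # apply F until the value stops changing; on a finite lattice a
    # monotonic F stabilizes, at the weakest fixpoint when started from
    # the top, and at the strongest when started from the bottom
    s = start
    while F(s) != s:
        s = F(s)
    return s

print(iterate(F, U))            # weakest fixpoint: frozenset({0, 1})
print(iterate(F, frozenset()))  # strongest fixpoint: frozenset({0})
```

Here F has exactly two fixpoints, {0} and {0, 1}, and the iteration finds the weakest or the strongest depending on the starting point, which is what I understood 1.2.1 to claim (and 1.2.2 to require monotonicity for).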
Questions on Recursive Definition
You ask “My question is, when I encounter the phrase “recursive construction” in an exam, does my answer have to be the least one?”. Hopefully the question makes it clear whether it has to be the least one. If the question does not make it clear, raise your hand and ask.
You ask: “Considering the first and second steps of “recursive construction”, do we always have to show the first several expansions?”. Only if the question says so. If you can see the general case without showing the first few steps of construction, then you don’t have to.
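For a concrete instance, take the textbook equation nat = 0, nat+1. The first few construction steps can be generated mechanically; a small Python sketch, with finite sets standing in for bunches, purely for illustration:

```python
# Recursive construction for the equation  nat = 0, nat+1 :
# start from null (the empty bunch) and repeatedly apply the right side.
def step(s):
    # one construction step: 0, s+1
    return {0} | {n + 1 for n in s}

approx = set()             # nat_0 = null
for i in range(4):
    approx = step(approx)  # nat_{i+1} = 0, nat_i + 1
    print(sorted(approx))
# prints [0], [0, 1], [0, 1, 2], [0, 1, 2, 3]
```

After a couple of steps the general shape nat_n = 0,..n is visible, and one can guess the limit without writing out any more expansions.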
You ask: “For the solution of the question 373 and 374, I don’t know why they are the strongest implementable fixpoint.”. Ex.373 and 374 say “the strongest implementable solution that you can find”, which might not be the strongest that there is. However, if the solution is deterministic, then there is no stronger implementable solution.
You are right that the videos and textbook sometimes make statements without proof. The proof may use math, for example lattice theory, that I did not present.
You ask: “Is using LIM a necessary step in “recursive construction”?”. I say no. (There are other people who would say yes.) Furthermore, my definition of LIM differs from some other book’s definitions (and they differ from each other). I have not talked about computing LIM, and I don’t want to take the course in that direction.
In Ex.357, for any x, (LIM n· x: P_n) is false because for all n past x, (x:P_n) is false. And (§x: false) is null.
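To illustrate the shape of that argument with a made-up P (not the actual P of Ex.357): suppose P_n = {i | i ≥ n}, so each fixed x is outside P_n for every n past x. A finite Python check:

```python
# Shape of the Ex.357 argument, with a hypothetical P_n = {i | i >= n}:
# for any fixed x, (x: P_n) is false for all n past x, so the limit of
# the sequence (x: P_n) is false.
def in_P(x, n):
    # stand-in for x: P_n, for illustration only
    return x >= n

for x in range(10):
    # (x: P_n) fails for every n past x (checked up to a finite bound)
    assert all(not in_P(x, n) for n in range(x + 1, x + 50))
```

Since (LIM n· x: P_n) is false for every x, the solution (§x· false) is null, as the posted solution states.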

[quote=“hehner, post:2, topic:607”]
You are right that the videos and textbook sometimes make statements without proof. The proof may use math, for example lattice theory, that I did not present.
[/quote]
I find these things intriguing. Where can I find the relevant material? A book title would be enough.
At the end of pg.96: “Finding a limit is harder than making a substitution, and the result is still not guaranteed to produce a fixedpoint. We could define a property, called “continuity”, which, together with monotonicity, is sufficient to guarantee that the limit is the least fixedpoint, but we leave the subject to other books.”
Which “other books” are you referring to?
The article that started us along the path of using limits and continuity for recursive construction is
D.S.Scott, C.Strachey: “Outline of a Mathematical Theory of Computation”, technical report PRG-2, Oxford University, 1970; also Proceedings of the Fourth Annual Princeton Conference on Information Sciences and Systems, pages 169–177, 1970
It won them a Turing Award, but it’s old and hard to read. I wrote a book that contains some of the material on limits and continuity
E.C.R.Hehner: the Logic of Programming, Prentice-Hall International, 1984
But I’m not recommending it. A book I like a lot, that develops and uses lattice theory is
R.J.R.Back, J.vonWright: Refinement Calculus: a Systematic Introduction, Springer, 1998
A great book for lattice theory and limits and continuity as used in program semantics is
E.W.Dijkstra, C.S.Scholten: Predicate Calculus and Program Semantics, Springer, 1990
There are many math texts on limits and continuity, but they are interested in real-valued functions/sequences only, and do not define limits for binary-valued functions/sequences.
There are many math texts about lattice theory. The classic, and the one I learned from in 1967, is
G.Birkhoff: Lattice Theory, third edition, Providence, R.I., 1967
But I prefer those texts that are written from a computer science point of view. One is
V.K.Garg: Lattice Theory with Applications, U.Texas at Austin, 2003
You can download it from the web. Chapter 17 is “Fixed Point Theory”.
Since this question is still about recursive construction, I will post it here.
In Section 5.2, we consider the refinement
W ⇐ while b do P od
to be an alternative notation for the refinement
W ⇐ if b then P. W else ok fi
However, I don’t know how the equivalence above can be deduced from the axioms introduced in Chapter 6:
t′≥t ⇐ while b do P od
if b then P. t:= t+1. while b do P od else ok fi ⇐ while b do P od
∀σ, σ′· t′≥t ∧ if b then P. t:= t+1. W else ok fi ⇐ W
⇒ ∀σ, σ′· while b do P od ⇐ W
Or are they actually totally different things?
Yes, the way of dealing with loops in Chapter 5 and the way in Chapter 6 are totally different. Neither way can be deduced from the other. Each way lets us prove things the other way cannot prove. As stated at the end of p.99, if x is a state variable and b is any binary expression,
x’≥x ⇐ while b do x:= x+1 od
seems reasonable to me. If the body of the loop doesn’t decrease x, then the loop doesn’t decrease x, no matter what b is. And this is easily proven by the method of Chapter 5. But it is not provable at all by the definition of while-loop in Chapter 6. Also, the Chapter 5 method allows us to separate the results proof from the timing proof; the Chapter 6 definition does not. On the other hand, the while-induction axiom in Chapter 6 is not provable by the methods of Chapter 5. But the while-construction axiom in Chapter 6 can be proven by the methods of Chapter 5.
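That claim can at least be sanity-checked by simulation. The sketch below runs the loop with a few concrete guards b (chosen here only so the loop terminates) and several start states, and confirms the final x is never below the initial x; it is a finite check, not the Chapter 5 proof:

```python
# Finite sanity check of  x' >= x  <=  while b do x := x+1 od :
# the body never decreases x, so the loop never decreases x.
def run(x, b):
    # executes  while b do x := x+1 od ; assumes this particular b terminates
    while b(x):
        x = x + 1
    return x

for b in (lambda x: x < 5, lambda x: False, lambda x: 0 <= x < 3):
    for x0 in range(-3, 8):
        assert run(x0, b) >= x0
```

The actual Chapter 5 proof is a refinement proof, independent of any particular b; the simulation only illustrates why the refinement is plausible.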
I think the Chapter 5 methods are easier to use for proving the refinements that arise in programming. The Chapter 6 loop definition is the standard definition used by people in the field of program semantics. So I present both in my book.