
Related articles

[4 earlier articles]
Re: Compiler Correctness neelk@cs.cmu.edu (Neelakantan Krishnaswami) (2006-02-11)
Re: Compiler Correctness danwang74@gmail.com (Daniel C. Wang) (2006-02-11)
Re: Compiler Correctness torbenm@app-1.diku.dk (2006-02-11)
Re: Compiler Correctness henry@spsystems.net (2006-02-11)
Re: Compiler Correctness Sid.Touati@inria.fr (Sid Touati) (2006-02-11)
Re: Compiler Correctness haberg@math.su.se (2006-02-11)
Re: Compiler Correctness haberg@math.su.se (2006-02-11)
Re: Compiler Correctness henry@spsystems.net (2006-02-11)
Re: Compiler Correctness Xavier.Leroy@inria.fr (Xavier Leroy) (2006-02-12)
Re: Compiler Correctness DrDiettrich@compuserve.de (Hans-Peter Diettrich) (2006-02-14)
Re: Compiler Correctness marcov@stack.nl (Marco van de Voort) (2006-02-14)
Re: Compiler Correctness DrDiettrich@compuserve.de (Hans-Peter Diettrich) (2006-02-14)
Re: Compiler Correctness henry@spsystems.net (2006-02-17)
[4 later articles]

From: haberg@math.su.se (Hans Aberg)
Newsgroups: comp.compilers
Date: 11 Feb 2006 15:28:36 -0500
Organization: Mathematics
References: 06-02-053 06-02-061 06-02-064
Keywords: theory
Posted-Date: 11 Feb 2006 15:28:36 EST

Neelakantan Krishnaswami <neelk@cs.cmu.edu> wrote:

> Basically, the idea is that a theorem prover is inherently
> untrustworthy. It's large, complex, and relies on very advanced math
> for its correct operation -- it will have bugs with basically 100%
> certainty.
>
> So what these programs do is construct the actual proof of the theorem
> as a data structure which you can print out. Then, you can write a
> proof *checker* to verify the operations that the theorem prover has
> carried out.

This is interesting, because in working on a theorem prover I have
reached the same conclusions. In particular, the question of bound
variables is tricky.

I think, though, that one might combine a theorem prover and a proof
checker, so that faulty proofs are detected right away. This is what I
am about to do, at least. Roughly, what is needed is that the
substitutions that the theorem prover engine (think of Prolog, but more
complicated) discovers are then verified for correctness.
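To make the idea concrete, here is a minimal sketch (all names are mine, purely illustrative) of checking a prover's substitutions independently. Terms are strings starting with '?' for variables, or tuples (functor, args...). The checker does not search for anything; it merely applies the substitution the engine claims to have found and compares the results syntactically:

```python
def apply_subst(term, subst):
    """Apply a substitution (dict: variable -> term) to a term."""
    if isinstance(term, str) and term.startswith('?'):
        # Chase bindings so chained substitutions like ?x -> ?y -> a resolve.
        return apply_subst(subst[term], subst) if term in subst else term
    if isinstance(term, tuple):
        return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])
    return term

def check_unifier(t1, t2, subst):
    """Verify a claimed unifier: both terms must become identical under it."""
    return apply_subst(t1, subst) == apply_subst(t2, subst)

# The engine (not shown) claims {?x: ('f', 'a')} unifies these two terms:
t1 = ('g', '?x', 'b')
t2 = ('g', ('f', 'a'), 'b')
print(check_unifier(t1, t2, {'?x': ('f', 'a')}))   # True
print(check_unifier(t1, t2, {'?x': 'a'}))          # False: wrong binding
```

The checker is a few lines and easy to inspect, while the search that produces the substitution can be arbitrarily complicated. (A real checker would also need an occurs check and careful handling of bound variables, which is exactly where things get tricky.)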

> A proof checker is much smaller than the theorem prover; usually they
> are a few hundred lines of code. Thus, they are small enough that
> Perlis's criticisms don't apply to them -- the checkers are programs
> that people can manually verify and easily write multiple competing
> implementations to increase trust.

One example of a small, standalone one is MetaMath
<http://us.metamath.org/>. A mathematical proof consists essentially of
finding suitable substitutions applied to a suitable sequence of axioms
which leads up to the statement one wants to prove. So all the proof
checker needs to do is verify that the found sequence of axioms (and
formerly proved statements, as these can be formally expanded) and
substitutions are applied correctly.
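A toy checker in that style might look as follows (a hypothetical, much simplified sketch, not MetaMath's actual format): each proof line is either an axiom instance given with an explicit substitution, or an application of modus ponens to two earlier lines. Formulas are nested tuples, and bare strings in the axiom templates are metavariables:

```python
AXIOMS = {
    # The simplification axiom: phi -> (psi -> phi)
    'ax-1': ('->', 'phi', ('->', 'psi', 'phi')),
}

def instantiate(template, subst):
    """Substitute formulas for the metavariables of an axiom template."""
    if isinstance(template, str):
        return subst[template]
    return (template[0],) + tuple(instantiate(t, subst) for t in template[1:])

def check_proof(lines):
    """Return the list of proved formulas, or raise ValueError."""
    proved = []
    for step in lines:
        if step[0] == 'axiom':
            _, name, subst = step
            proved.append(instantiate(AXIOMS[name], subst))
        elif step[0] == 'mp':
            _, i, j = step
            impl = proved[j]     # must be ('->', proved[i], consequent)
            if impl[0] != '->' or impl[1] != proved[i]:
                raise ValueError('modus ponens does not apply')
            proved.append(impl[2])
        else:
            raise ValueError('unknown step kind')
    return proved

# Derive r -> (p -> (q -> p)) from two instances of ax-1 and one mp step:
p, q, r = ('P',), ('Q',), ('R',)
X = ('->', p, ('->', q, p))
proof = [
    ('axiom', 'ax-1', {'phi': p, 'psi': q}),   # line 0: X, i.e. p -> (q -> p)
    ('axiom', 'ax-1', {'phi': X, 'psi': r}),   # line 1: X -> (r -> X)
    ('mp', 0, 1),                              # line 2: r -> X
]
print(check_proof(proof)[2] == ('->', r, X))   # True
```

The proof itself is just data; the checker only confirms that each step is a correct substitution instance or a correct rule application, which is why multiple competing implementations are cheap to write.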

> So, the process is:
>
> 1. You develop your proof interactively with the prover.
> 2. You dump out the proof and run your proof checkers on it.
> 3. If it says your proof is good, then you're done.
> 4. If it says your proof is bad, then you curse, and fix the
> bug in the theorem prover and redo your proof.

I think point 4 here is very important. :-) It is very difficult to
develop a bug-free theorem prover.

> Interestingly, this is a technique that Leroy used in his compiler.
> For example, the register allocator would find a coloring, and then
> he'd run a checker on it to ensure that it was okay before proceeding.
> If the check failed, then his compiler would report that it failed to
> compile that program rather than emitting incorrect code.

Therefore, if one wants to build a compiler verifier, one might be
better off isolating a smaller, algorithmic language, for which it is
easier to implement a bug-free compiler.
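The coloring check mentioned in the quote illustrates the same asymmetry as proof checking: finding a register assignment is hard, but verifying one is a few lines. A sketch (illustrative names, not Leroy's actual code), with the interference graph as adjacency sets and the coloring as a map from temporaries to registers:

```python
def valid_coloring(interference, coloring):
    """True iff no two interfering temporaries share a register."""
    for node, neighbours in interference.items():
        for other in neighbours:
            if coloring.get(node) == coloring.get(other):
                return False
    return True

# Temporaries a and b are live at the same time, as are b and c;
# a and c never interfere, so they may share a register.
interference = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(valid_coloring(interference, {'a': 'r1', 'b': 'r2', 'c': 'r1'}))  # True
print(valid_coloring(interference, {'a': 'r1', 'b': 'r1', 'c': 'r2'}))  # False
```

On a failed check the compiler rejects the program instead of emitting wrong code, so only the small checker, not the whole allocator, needs to be trusted (or proved correct).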

> IIRC, the logic in Coq is equivalent to ZFC with countably many
> inaccessible cardinals. That's a very rich foundation, but still
> pretty conventional.

The consistency of ZFC and similar set theories cannot be proved (by
Goedel's theorems) unless one invokes a more powerful metamathematics.
And if it is not consistent, all mathematics falls, since in classical
logic a contradiction makes every statement both true and false. One
then ends up with a belief that the axioms are sufficient, because what
should be used to prove the metamathematical theory one invokes to
prove these theories? To do that, one has to invoke a still more
powerful metametamathematics, and so on. In addition, some mathematical
fields require extensions of ZFC (typically category-related fields).
So one should not assume that ZFC is a blessing for "all known
mathematics", as popular belief seems to have it.

Most working mathematics is likely to be constructible within a much
smaller theory than ZFC and similar axiomatic set theories, since one
usually has some rather basic explicit objects at the very bottom, with
the theoretical extensions coming in merely as tools for handling these
efficiently. And the algorithmic parts used in a compiler should need
even less, dealing only with some rather explicit binary number
manipulations.

So, again, for a compiler verifier, perhaps it helps to focus on a
smaller theory.

--

Hans Aberg
