Peter Lee: Proof-Carrying Code

Peter Lee

Proof-Carrying Code

Quick links to:
  • Safe Kernel Extensions Without Run-Time Checking, by George Necula and Peter Lee, OSDI'96. This is probably the best overview/introduction to the concept. Available in either Postscript or HTML formats.
  • Proof-Carrying Code, by George Necula, POPL'97. This provides a quick overview of the theoretical foundations of PCC.
  • Research on Proof-Carrying Code for Mobile Code Security, by Peter Lee and George Necula, DARPA Workshop on Foundations of Mobile Code Security. Read this short position paper for the quickest overview of PCC.
  • George Necula's home page. This contains an up-to-date annotated bibliography on PCC and Certifying Compilers.
  • An Overview of Proof-Carrying Code

    Proof-Carrying Code (PCC) is a technique by which a code consumer (e.g., a host) can verify that code provided by an untrusted code producer adheres to a predefined set of safety rules. These rules, also referred to as the safety policy, are chosen by the code consumer so that they suffice to guarantee the safe behavior of programs. There are many potential applications of PCC. For example, for mobile code the code consumer would be an Internet host (e.g., a web browser) and the code producer a server that sends applets. In operating systems, the kernel can act as the host, with untrusted applications acting as code producers whose code is downloaded into and executed in the kernel's address space.

    The key idea behind proof-carrying code is that the code producer is required to create a formal safety proof that attests to the fact that the code respects the defined safety policy. Then, the code consumer is able to use a simple and fast proof validator to check, with certainty, that the proof is valid and hence the foreign code is safe to execute.

    Any implementation of PCC must contain at least four elements: (1) a formal specification language used to express the safety policy, (2) a formal semantics of the language used by the untrusted code, usually in the form of a logic relating programs to specifications, (3) a language used to express the proofs, and (4) an algorithm for validating proofs.
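
    As a toy illustration of these four elements, consider the following sketch, in which the "language" consists of a single store instruction, the safety policy is that every store stays inside a fixed buffer, a "proof" is a list of claimed per-instruction bounds, and the validator checks each claim. The instruction format and all names here are invented for illustration; they do not come from the actual PCC implementation.

```python
# Toy sketch of the four PCC elements (all names are illustrative).

# (1) Safety policy: every store must land inside a buffer of SIZE cells.
SIZE = 16

# (2) Untrusted code: a list of ("store", address) instructions.
code = [("store", 3), ("store", 15), ("store", 7)]

# (3) Proof language: here a "proof" is just a list of claimed bounds,
# one (lo, hi) pair per instruction, meaning "this store writes in [lo, hi)".
proof = [(0, SIZE), (0, SIZE), (0, SIZE)]

# (4) Proof validator: check that each claim implies the policy and that
# each instruction actually satisfies its claim.
def validate(code, proof):
    if len(proof) != len(code):
        return False
    for (lo, hi), (op, addr) in zip(proof, code):
        if op != "store":
            return False
        if not (0 <= lo and hi <= SIZE and lo <= addr < hi):
            return False
    return True
```

    Tampering with the code (say, changing an address to 16) makes the validator reject it, since the accompanying claim no longer holds.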

    In our current implementation the untrusted code is in the form of machine code for the DEC Alpha processor and the specifications are written in first-order logic. As a means of relating machine code to specifications we use a form of Floyd's verification-condition generator that extracts the safety properties of a machine code program as a predicate in first-order logic. This predicate must then be proved by the code producer using axioms and inference rules supplied by the code consumer as part of the safety policy.
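
    The verification-condition idea can be sketched as follows, using an invented three-instruction toy language in place of Alpha machine code: the generator walks the program and emits one first-order proof obligation per memory access, and the code producer must then discharge each obligation using the consumer's axioms and inference rules.

```python
# A minimal verification-condition generator, sketched for a hypothetical
# toy language rather than actual Alpha machine code.

def vcgen(program, size):
    """Return one first-order safety obligation (as a string) per memory access."""
    obligations = []
    for op, *args in program:
        if op in ("load", "store"):
            addr = args[0]  # a symbolic address expression, kept as a string
            obligations.append(f"0 <= {addr} and {addr} < {size}")
        elif op == "add":
            pass            # register arithmetic raises no safety obligation here
    return obligations

# Example: a load at r1 + 4 and a store at r2 yield two proof obligations.
prog = [("load", "r1 + 4"), ("add", "r3", "r1", "r2"), ("store", "r2")]
```

    In the real system the predicate is over machine state rather than string expressions, but the shape is the same: each potentially unsafe operation contributes a conjunct that the safety proof must establish.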

    Finally, we use the Edinburgh Logical Framework (LF), which is essentially a typed lambda calculus, to encode and check the proofs. The basic tenet of LF is that proofs are represented as expressions and predicates as types. In order to check the validity of a proof we only need to typecheck its representation. The proof validator is therefore an LF typechecker, which is not only extremely simple and efficient but independent of the particular safety policy or logic used. This realization of PCC using LF is described in detail in a technical report.
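
    The proofs-as-types idea can be illustrated with a tiny checker for an implicational fragment: propositions are types, proof terms are lambda expressions, and checking a proof is just typechecking the term. This is a minimal sketch of the Curry-Howard principle underlying LF, not the actual LF typechecker.

```python
# Proofs as expressions, predicates as types, in miniature.
# Propositions: ("atom", name) or ("imp", A, B)   (A implies B)
# Proof terms:  ("var", name), ("lam", name, A, body), ("app", f, x)

def infer(ctx, term):
    """Return the proposition proved by `term` in context `ctx`, or None."""
    tag = term[0]
    if tag == "var":                  # a hypothesis proves what the context says
        return ctx.get(term[1])
    if tag == "lam":                  # \x:A. body proves A -> B if body proves B
        _, name, a, body = term
        b = infer({**ctx, name: a}, body)
        return ("imp", a, b) if b is not None else None
    if tag == "app":                  # modus ponens: from A -> B and A, conclude B
        _, f, x = term
        tf, tx = infer(ctx, f), infer(ctx, x)
        if tf is not None and tf[0] == "imp" and tf[1] == tx:
            return tf[2]
    return None

# The identity proof \x:A. x typechecks as A -> A:
A = ("atom", "A")
assert infer({}, ("lam", "x", A, ("var", "x"))) == ("imp", A, A)
```

    An invalid proof term simply fails to typecheck, which is exactly how the LF-based validator rejects bogus safety proofs.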

    Note the many instances where elements from logic, type theory, and programming-language semantics arise in a realization of PCC. Extensive use of these formal systems is required in order to make any guarantees about the safety of the approach. In fact, we are able to prove theorems that guarantee the safety of the PCC technique, modulo a correct implementation of the LF typechecker and a sound safety policy.

    Advantages of proof-carrying code

    There are many advantages to using PCC for mobile code. First, although establishing and formally proving the safety of the mobile code might require a large amount of effort, almost the entire burden of doing this is shifted to the code producer. The code consumer, on the other hand, has only to perform a fast, simple, and easy-to-trust proof-checking process. The trustworthiness of the proof checker is an important advantage over approaches that rely on complex compilers or interpreters in the code consumer.

    Second, the code consumer does not care how the proofs are constructed. In our current experiments we rely on a theorem prover, but in general there is no reason (other than the effort required) that the proofs could not be generated by hand. We also believe that it is feasible to build a certifying compiler that constructs proofs automatically in the course of compilation. No matter how the proofs are generated, an important consequence is that the consumer does not have to trust the proof-generation process.

    Third, PCC programs are ``tamperproof,'' in the sense that any modification (either accidental or malicious) will result in one of three possible outcomes: (1) the proof will no longer be valid (that is, it will no longer typecheck), and so the program will be rejected, (2) the proof will be valid, but will not be a safety proof for the program, and so again the program will be rejected, or (3) the proof will still be valid and will still be a safety proof for the program, despite the modifications. In the third case, even though the behavior of the program might have been changed, the guarantee of safety still holds.

    Fourth, no cryptography or trusted third parties are required, because PCC checks intrinsic properties of the code and not its origin. In this sense, PCC programs are ``self-certifying.'' On the other hand, PCC is completely compatible with other approaches to mobile-code security. For example, in another essay, we discuss how trust management and PCC can be used together for mobile code security. We also have some experience in using PCC to verify the correctness of applying Software Fault Isolation to network packet filters. In engineering terms, combining approaches offers different tradeoffs (e.g., less effort in proof generation at the cost of slower run-time performance) and hence greater flexibility in system design.

    Fifth, because the untrusted code is verified statically, before being executed, we not only save execution time but also detect potentially hazardous operations early, thereby avoiding situations in which the code consumer must kill the untrusted process after it has acquired resources or modified state.

    These five advantages are, at bottom, statements about the benefits of static checking over dynamic checking. We believe that static checking is essential for mobile-code security, and that system designers in general have a somewhat limited view of how static checking can be used.

    Early experience

    In order to gain more experience with PCC and to measure its costs, we have performed a series of experiments. We started with simple but practical applications such as machine-code implementations of network packet filters and the IP checksum routine. For these early experiments the safety policy focused on fine-grained memory safety. Our packet filters ran from about 30% faster to 10 times faster than comparable filters implemented using other approaches, while the safety proofs were smaller than 800 bytes and required no more than 3ms on a DEC Alpha to validate.

    We continued our experimentation with more complex safety policies. In one experiment, the ``active ping,'' we wrote extensions of packet filters that can also allocate and deallocate memory, acquire and release locks, and send network packets. In addition to memory safety, we also check that all acquired resources are released within a specified time bound. We also put bounds on the quantity of resources used, including the total number of packets sent and the total number of instructions executed. For this example, the safety proof grew to 3.5Kbytes and the validation time to 18ms. However, we believe that we have only scratched the surface of possible proof-encoding techniques, and that much more efficient representations are possible.

    For more details, see the papers listed at the top of this page. You may also contact me or George Necula.