Vitaly Shmatikov



Security Analysis of Voice-over-IP Protocols
With Prateek Gupta.
To appear in Proc. of 20th IEEE Computer Security Foundations Symposium (CSF), Venice, Italy, July 2007.

Abstract:
The transmission of voice communications as datagram packets over IP networks, commonly known as Voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other.

The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the "human authentication" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.
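The severity of the SDES replay attack comes from the "two-time pad" failure mode of stream ciphers: once SRTP repeats a keystream, XORing two ciphertexts cancels the keystream entirely. The following toy sketch (plain Python XOR, not the actual SRTP cipher) illustrates why keystream reuse is fatal:

```python
# Toy illustration of the "two-time pad" problem: reusing a stream-cipher
# keystream leaks the XOR of the two plaintexts. This is NOT the SRTP
# cipher itself -- just a minimal sketch of the underlying failure mode.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)          # the same keystream used twice (the flaw)
m1 = b"attack at dawn!!"
m2 = b"retreat at nine."

c1 = xor_bytes(m1, keystream)
c2 = xor_bytes(m2, keystream)

# The attacker never sees the keystream, yet XORing the two ciphertexts
# cancels it out, leaking the XOR of the plaintexts:
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(m1, m2)

# Knowing (or guessing) one plaintext then reveals the other outright:
recovered_m2 = xor_bytes(leaked, m1)
assert recovered_m2 == m2
```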


Efficient Two-Party Secure Computation on Committed Inputs
With Stanisław Jarecki.
To appear in Proc. of Advances in Cryptology - EUROCRYPT 2007, Barcelona, Spain, May 2007.

Abstract:
We present an efficient construction of Yao's "garbled circuits" protocol for securely computing any two-party circuit on committed inputs. The protocol is secure in a universally composable way in the presence of malicious adversaries under the decisional composite residuosity (DCR) and strong RSA assumptions, in the common reference string model. The protocol requires a constant number of rounds (four or five in the standard model, two or three in the random oracle model, depending on whether both parties receive the output), O(|C|) modular exponentiations per player, and a bandwidth of O(|C|) group elements, where |C| is the size of the computed circuit.
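For readers unfamiliar with garbled circuits, here is a minimal textbook-style sketch of a single garbled AND gate (not the committed-input construction of this paper). SHA-256 stands in for the symmetric encryption, and the evaluator's validity check against the garbler's label set is a simplification of the point-and-permute or validity-tag mechanisms used in real schemes:

```python
# Minimal sketch of one Yao garbled AND gate. Each wire gets two random
# labels; the gate table encrypts the correct output label under every
# pair of input labels. SHA-256 serves as a toy encryption/KDF here.
import hashlib, os, random

def H(k1, k2):
    return hashlib.sha256(k1 + k2).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# wire labels: labels[wire][bit]
labels = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}

# garbled table: one row per input combination, in random order
table = [xor(labels["out"][x & y], H(labels["a"][x], labels["b"][y]))
         for x in (0, 1) for y in (0, 1)]
random.shuffle(table)

def evaluate(la, lb):
    # The evaluator tries every row; only the matching row decrypts to a
    # valid label (checked against the garbler's label set for simplicity).
    for row in table:
        candidate = xor(row, H(la, lb))
        if candidate in labels["out"]:
            return candidate
    return None

# Evaluating with labels for a=1, b=0 yields the label encoding output 0:
assert evaluate(labels["a"][1], labels["b"][0]) == labels["out"][0]
assert evaluate(labels["a"][1], labels["b"][1]) == labels["out"][1]
```

The evaluator learns only one label per wire and therefore nothing about the other party's inputs beyond the gate's output.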

Our technical tools are of independent interest. We propose a homomorphic, semantically secure variant of the Camenisch-Shoup verifiable cryptosystem, which uses shorter keys, is unambiguous (it is infeasible to generate two keys which successfully decrypt the same ciphertext), and allows efficient proofs that a committed plaintext is encrypted under a committed key.

Our second tool is a practical four-round (two-round in ROM) protocol for committed oblivious transfer on strings (string-COT) secure against malicious participants. The string-COT protocol takes a few exponentiations per player, and is UC-secure under the DCR assumption in the common reference string model. Previous protocols of comparable efficiency achieved either committed OT on bits, or standard (non-committed) OT on strings.


dFence: Transparent Network-Based Denial of Service Mitigation
With Ajay Mahimkar, Jasraj Dange, Harrick Vin, and Yin Zhang.
In Proc. of 4th USENIX Symposium on Networked Systems Design and Implementation (NSDI), Cambridge, MA, April 2007, pp. 327-340. USENIX, 2007.

Abstract:
Denial of service (DoS) attacks are a growing threat to the availability of Internet services. We present dFence, a novel network-based defense system for mitigating DoS attacks. The main thesis of dFence is complete transparency to the existing Internet infrastructure, with no software modifications at either the routers or the end hosts. dFence dynamically introduces special-purpose middlebox devices into the data paths of the hosts under attack. By intercepting both directions of IP traffic (to and from attacked hosts) and applying stateful defense policies, dFence middleboxes effectively mitigate a broad range of spoofed and unspoofed attacks. We describe the architecture of the dFence middlebox, mechanisms for on-demand introduction and removal, and DoS mitigation policies, including defenses against DoS attacks on the middlebox itself. We evaluate our prototype implementation based on Intel IXP network processors.


How To Break Anonymity of the Netflix Prize Dataset
With Arvind Narayanan.
November 2006.

Abstract:
As part of the Netflix Prize contest, Netflix - the world's largest online movie rental service - publicly released a dataset containing movie ratings of 500,000 Netflix subscribers. The dataset is intended to be anonymous, and all personally identifying information has been removed.

We demonstrate that an attacker who knows only a little bit about an individual subscriber can easily identify this subscriber's record if it is present in the dataset, or, at the very least, identify a small set of records which include the subscriber's record. For example, an attacker who knows the subscriber's ratings on 6 movies that are not among the top 100 most rated movies and approximate dates when these ratings were entered into the system has an 80% chance of successfully identifying the subscriber's record. This knowledge need not be precise, e.g., the dates may only be known to the attacker with a 14-day error, the ratings may be known only approximately, and some of the ratings may even be completely wrong. Even without any knowledge of rating dates, knowing the subscriber's ratings on 6 movies outside the top 100 most rated movies reduces the set of plausible candidate records to 8 (out of almost 500,000) for 70% of the subscribers whose records have been released. With the candidate set so small, deanonymization can then be completed with additional human analysis.
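The style of matching described above can be sketched in a few lines: score each candidate record by how well it agrees with the attacker's approximate auxiliary information, tolerating rating and date errors. The data layout and scoring rule below are illustrative assumptions, not the paper's exact algorithm:

```python
# Sketch of auxiliary-information matching: count how many of the
# attacker's approximately-known (rating, date) pairs are consistent
# with a candidate record, then keep the best-scoring candidates.
# Data layout and scoring rule are illustrative, not the paper's own.
def score(record, aux, date_slack=14, rating_slack=1):
    """record, aux: dicts movie_id -> (rating, day_number)."""
    hits = 0
    for movie, (r, d) in aux.items():
        if movie in record:
            rr, dd = record[movie]
            if abs(rr - r) <= rating_slack and abs(dd - d) <= date_slack:
                hits += 1
    return hits

def best_candidates(dataset, aux):
    scores = {sid: score(rec, aux) for sid, rec in dataset.items()}
    top = max(scores.values())
    return [sid for sid, s in scores.items() if s == top]

dataset = {
    "subscriber_1": {"m1": (5, 100), "m2": (3, 120), "m3": (4, 200)},
    "subscriber_2": {"m1": (1, 400), "m4": (2, 410)},
}
aux = {"m1": (4, 95), "m3": (4, 210)}    # approximate attacker knowledge
print(best_candidates(dataset, aux))     # -> ['subscriber_1']
```

With movies outside the most-rated few hundred, very few records are consistent with even a handful of such constraints, which is why the candidate set collapses so quickly.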

Using the Internet Movie Database (IMDb) as our source of auxiliary information, we successfully identified Netflix records of a few non-anonymous IMDb users, uncovering information - such as their apparent political preferences - that could not be determined from their public IMDb ratings. Finally, we discuss the implications that a successful deanonymization of the Netflix dataset may have for the Netflix Prize competition.


Measuring Relationship Anonymity in Mix Networks
With Ming-Hsiu Wang.
In Proc. of 5th ACM Workshop on Privacy in the Electronic Society (WPES), Alexandria, VA, October 2006, pp. 59-62. ACM, 2006.

Abstract:
Many applications of mix networks such as anonymous Web browsing require relationship anonymity: it should be hard for the attacker to determine who is communicating with whom. Conventional methods for measuring anonymity, however, focus on sender anonymity instead. Sender anonymity guarantees that it is difficult for the attacker to determine the origin of any given message exiting the mix network, but this may not be sufficient to ensure relationship anonymity. Even if the attacker cannot identify the origin of messages arriving at some destination, relationship anonymity will fail if he can determine with high probability that at least one of the messages originated from a particular sender, without necessarily being able to recognize this message among others.

We give a formal definition and a calculation methodology for relationship anonymity. Our techniques are similar to those used for sender anonymity, but, unlike sender anonymity, relationship anonymity is sensitive to the distribution of message destinations. In particular, Zipfian distributions with skew values characteristic of Web browsing provide especially poor relationship anonymity. Our methodology takes route selection algorithms into account, and incorporates information-theoretic metrics such as entropy and min-entropy. We illustrate our methodology by calculating relationship anonymity in several simulated mix networks.
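The two information-theoretic metrics mentioned above are easy to state concretely. Applied to the attacker's posterior distribution over a sender's possible destinations, higher entropy means better anonymity, and a skewed (e.g., Zipfian) distribution drives both metrics down; the example distribution below is illustrative:

```python
# Shannon entropy and min-entropy of the attacker's posterior
# distribution over destinations. The Zipf-like vector is only an
# illustrative example of a skewed browsing distribution.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    return -math.log2(max(p))

uniform = [0.25, 0.25, 0.25, 0.25]
zipfian = [0.62, 0.21, 0.11, 0.06]

print(entropy(uniform), min_entropy(uniform))   # 2.0 2.0 (bits)
print(entropy(zipfian), min_entropy(zipfian))   # both well below 2.0
```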


Timing Analysis in Low-Latency Mix Networks: Attacks and Defenses
With Ming-Hsiu Wang.
In Proc. of 11th European Symposium on Research in Computer Security (ESORICS), Hamburg, Germany, September 2006, vol. 4189 of Lecture Notes in Computer Science, pp. 18-33. Springer, 2006.

Abstract:
Mix networks are a popular mechanism for anonymous Internet communications. By routing IP traffic through an overlay chain of mixes, they aim to hide the relationship between its origin and destination. Using a realistic model of interactive Internet traffic, we study the problem of defending low-latency mix networks against attacks based on correlating inter-packet intervals on two or more links of the mix chain.

We investigate several attack models, including an active attack which involves adversarial modification of packet flows in order to "fingerprint" them, and analyze the tradeoffs between the amount of cover traffic, extra latency, and anonymity properties of the mix network. We demonstrate that previously proposed defenses are either ineffective, or impose a prohibitively large latency and/or bandwidth overhead on communicating applications. We propose a new defense based on adaptive padding.


Large-Scale Collection and Sanitization of Network Security Data: Risks and Challenges
With Phillip Porras.
To appear in Proc. of New Security Paradigms Workshop (NSPW), Schloß Dagstuhl, Germany, September 2006. ACM, 2006.

Abstract:
Over the last several years, there has been an emerging interest in the development of wide-area data collection and analysis centers to help identify, track, and formulate responses to the ever-growing number of coordinated attacks and malware infections that plague computer networks worldwide. As large-scale network threats continue to evolve in sophistication and extend to widely deployed applications, we expect that interest in collaborative security monitoring infrastructures will continue to grow, because such attacks may not be easily diagnosed from a single point in the network. The intent of this position paper is not to argue the necessity of Internet-scale security data sharing infrastructures, as there are ample research results and operational examples that already make this case. Instead, we observe that these well-intended activities raise a unique set of risks and challenges.

We outline some of the most salient issues faced by global network security centers, survey proposed defense mechanisms, and pose several research challenges to the computer security community. We hope that this position paper will serve as a stimulus to spur groundbreaking new research in protection and analysis technologies that can facilitate the collaborative sharing of network security data while keeping data contributors safe and secure.


Efficient Anonymity-Preserving Data Collection
With Justin Brickell.
In Proc. of 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Philadelphia, PA, August 2006, pp. 76-85. ACM, 2006.

Abstract:
The output of a data mining algorithm is only as good as its inputs, and individuals are often unwilling to provide accurate data about sensitive topics such as medical history and personal finance. Individuals may be willing to share their data, but only if they are assured that it will be used in an aggregate study and that it cannot be linked back to them. Protocols for anonymity-preserving data collection provide this assurance, in the absence of trusted parties, by allowing a set of mutually distrustful respondents to anonymously contribute data to an untrusted data miner.

To effectively provide anonymity, a data collection protocol must be collusion resistant, which means that even if all dishonest respondents collude with a dishonest data miner in an attempt to learn the associations between honest respondents and their responses, they will be unable to do so. To achieve collusion resistance, previously proposed protocols for anonymity-preserving data collection have quadratically many communication rounds in the number of respondents, and employ (sometimes incorrectly) complicated cryptographic techniques such as zero-knowledge proofs.

We describe a new protocol for anonymity-preserving, collusion resistant data collection. Our protocol has linearly many communication rounds, and achieves collusion resistance without relying on zero-knowledge proofs. This makes it especially suitable for data mining scenarios with a large number of respondents.


Key Confirmation and Adaptive Corruptions in the Protocol Security Logic
With Prateek Gupta.
In Proc. of FLOC Joint Workshop on Foundations of Computer Security and Automated Reasoning for Security Protocol Analysis (FCS-ARSPA), Seattle, WA, August 2006.
Full version contains all proofs.

Abstract:
Cryptographic security for key exchange and secure session establishment protocols is often defined in the so-called "adaptive corruptions" model. Even if the adversary corrupts one of the participants in the middle of the protocol execution and obtains the victim's secrets such as the private signing key, the victim must be able to detect this and abort the protocol. This is usually achieved by adding a key confirmation message to the protocol. Conventional symbolic methods for protocol analysis assume unforgeability of digital signatures, and thus cannot be used to reason about security in the adaptive corruptions model.

We present a symbolic protocol logic for reasoning about authentication and key confirmation in key exchange protocols. The logic is cryptographically sound: a symbolic proof of authentication and secrecy implies that the protocol is secure in the adaptive corruptions model. We illustrate our method by formally proving security of an authenticated Diffie-Hellman protocol with key confirmation.


Analysis of Probabilistic Contract Signing
With Gethin Norman.
Journal of Computer Security, vol. 14(6), pp. 561-589, 2006.

Abstract:
We present three case studies, investigating the use of probabilistic model checking to automatically analyse properties of probabilistic contract signing protocols. We use the probabilistic model checker PRISM to analyse three protocols: Rabin's probabilistic protocol for fair commitment exchange; the probabilistic contract signing protocol of Ben-Or, Goldreich, Micali, and Rivest; and a randomised protocol for signing contracts of Even, Goldreich, and Lempel. These case studies illustrate the general methodology for applying probabilistic model checking to formal verification of probabilistic security protocols.

For the Ben-Or et al. probabilistic contract signing protocol, we demonstrate the difficulty of combining fairness with timeliness. If, as required by timeliness, the judge responds to participants' messages immediately upon receiving them, then there exists a strategy for a misbehaving participant that brings the protocol to an unfair state with arbitrarily high probability, unless unusually strong assumptions are made about the quality of the communication channels between the judge and honest participants. We quantify the tradeoffs involved in the attack strategy, and discuss possible modifications of the protocol that ensure both fairness and timeliness.

For the Even et al. probabilistic contract signing protocol, we demonstrate that the responder enjoys a distinct advantage. With probability 1, the protocol reaches a state in which the responder possesses the initiator's commitment, but the initiator does not possess the responder's commitment. We then analyse several variants of the protocol, exploring the tradeoff between fairness and the number of messages that must be exchanged between participants.


Privacy-Preserving Graph Algorithms in the Semi-Honest Model
With Justin Brickell.
In Proc. of Advances in Cryptology - ASIACRYPT 2005, Chennai (Madras), India, December 2005, vol. 3788 of Lecture Notes in Computer Science, pp. 236-252. Springer, 2005.

Abstract:
We consider scenarios in which two parties, each in possession of a graph, wish to compute some algorithm on their joint graph in a privacy-preserving manner, that is, without leaking any information about their inputs except that revealed by the algorithm's output.

Working in the standard secure multi-party computation paradigm, we present new algorithms for privacy-preserving computation of APSD (all pairs shortest distance) and SSSD (single source shortest distance), as well as two new algorithms for privacy-preserving set union. Our algorithms are significantly more efficient than generic constructions. As in previous work on privacy-preserving data mining, we prove that our algorithms are secure provided the participants are honest-but-curious.


Fast Dictionary Attacks on Passwords Using Time-Space Tradeoff
With Arvind Narayanan.
In Proc. of 12th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, November 2005, pp. 364-372. ACM, 2005.

Abstract:
Human-memorable passwords are a mainstay of computer security. To decrease vulnerability of passwords to brute-force dictionary attacks, many organizations enforce complicated password-creation rules and require that passwords include numerals and special characters. We demonstrate that as long as passwords remain human-memorable, they are vulnerable to "smart-dictionary" attacks even when the space of potential passwords is large.

Our first insight is that the distribution of letters in easy-to-remember passwords is likely to be similar to the distribution of letters in the users' native language. Standard Markov modeling techniques from natural language processing can exploit this to dramatically reduce the size of the password space to be searched. Our second contribution is an algorithm for efficient enumeration of the remaining password space. This allows application of time-space tradeoff techniques, limiting memory accesses to a relatively small table of "partial dictionary" sizes and enabling a very fast dictionary attack.
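The Markov-filtering idea can be sketched as follows: keep only strings whose character-bigram score under a language model exceeds a threshold, shrinking the space to be searched before any hashing is done. The tiny bigram table and alphabet below are toy assumptions, not a real language model:

```python
# Sketch of Markov filtering of a password space: only strings that
# look sufficiently "language-like" under a bigram model survive.
# The bigram scores, alphabet, and threshold are toy assumptions.
from itertools import product

# toy additive bigram scores: higher = more "English-like"
bigram = {("p", "a"): 3, ("a", "s"): 3, ("s", "s"): 2,
          ("a", "a"): 0, ("s", "a"): 2, ("s", "p"): 1,
          ("p", "s"): 0, ("p", "p"): 0, ("a", "p"): 1}

def markov_score(word):
    return sum(bigram.get(pair, 0) for pair in zip(word, word[1:]))

alphabet = "pas"
threshold = 6
candidates = [w for w in ("".join(t) for t in product(alphabet, repeat=4))
              if markov_score(w) >= threshold]

print(len(candidates), "of", len(alphabet) ** 4, "strings survive")
```

Only the surviving candidates need to be hashed and compared, which is what makes the subsequent time-space tradeoff practical.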

We evaluated our method on a database of real-world user password hashes. Our algorithm successfully recovered 67.6% of the passwords using a 2 x 10^9 search space. This is a much higher percentage than Oechslin's "rainbow" attack, which is the fastest currently known technique for searching large keyspaces. These results call into question the viability of human-memorable character-sequence passwords as an authentication mechanism.


Obfuscated Databases and Group Privacy
With Arvind Narayanan.
In Proc. of 12th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, November 2005, pp. 102-111. ACM, 2005.

Abstract:
We investigate whether it is possible to encrypt a database and then give it away in such a form that users can still access it, but only in a restricted way. In contrast to conventional privacy mechanisms that aim to prevent any access to individual records, we aim to restrict the set of queries that can be feasibly evaluated on the encrypted database.

We start with a simple form of database obfuscation which makes database records indistinguishable from lookup functions. The only feasible operation on an obfuscated record is to look up some attribute Y by supplying the value of another attribute X that appears in the same record (i.e., someone who does not know X cannot feasibly retrieve Y). We then (i) generalize our construction to conjunctions of equality tests on any attributes of the database, and (ii) achieve a new property we call group privacy. This property ensures that it is easy to retrieve individual records or small subsets of records from the encrypted database by identifying them precisely, but "mass harvesting" queries matching a large number of records are computationally infeasible.

Our constructions are non-interactive. The database is transformed in such a way that all queries except those explicitly allowed by the privacy policy become computationally infeasible, i.e., our solutions do not rely on any access-control software or hardware.
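The record-as-lookup-function interface can be illustrated with a hash-based sketch: the value Y is recoverable only by someone who already knows the query attribute X. SHA-256 is used here as a stand-in key-derivation function; this shows the interface, not the paper's actual construction:

```python
# Sketch of an obfuscated record: attribute Y is stored XOR-masked
# under a pad derived from attribute X, so only someone who knows X
# can recover Y. SHA-256 is a toy stand-in KDF, not the paper's scheme.
import hashlib, os

def obfuscate(x, y, salt):
    pad = hashlib.sha256(salt + x.encode()).digest()[:len(y)]
    return bytes(a ^ b for a, b in zip(y.encode(), pad))

def lookup(x, blob, salt):
    pad = hashlib.sha256(salt + x.encode()).digest()[:len(blob)]
    return bytes(a ^ b for a, b in zip(blob, pad))

salt = os.urandom(16)
blob = obfuscate("alice@example.com", "555-0100", salt)

# The correct X recovers Y; any other X yields only pseudorandom bytes.
print(lookup("alice@example.com", blob, salt))   # -> b'555-0100'
```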


Towards Computationally Sound Symbolic Analysis of Key Exchange Protocols
With Prateek Gupta.
In Proc. of 3rd ACM Workshop on Formal Methods in Security Engineering (FMSE), Fairfax, VA, November 2005, pp. 23-32. ACM, 2005.
Full version contains all proofs.

Abstract:
We present a cryptographically sound formal method for proving correctness of key exchange protocols. Our main tool is a fragment of a symbolic protocol logic. We demonstrate that proofs of key agreement and key secrecy in this logic imply simulatability in Shoup's secure multi-party framework for key exchange. As part of the logic, we present cryptographically sound abstractions of CMA-secure digital signatures and a restricted form of Diffie-Hellman exponentiation, which is a technical result of independent interest. We illustrate our method by constructing a proof of security for a simple authenticated Diffie-Hellman protocol.


Probabilistic Polynomial-Time Semantics for a Protocol Security Logic
With Anupam Datta, Ante Derek, John C. Mitchell, and Mathieu Turuani.
In Proc. of 32nd International Colloquium on Automata, Languages and Programming (ICALP), Lisbon, Portugal, July 2005, vol. 3580 of Lecture Notes in Computer Science, pp. 16-29. Springer, 2005.

Abstract:
We describe a cryptographically sound formal logic for proving protocol security properties without explicitly reasoning about probability, asymptotic complexity, or the actions of a malicious attacker. The approach rests on a new probabilistic, polynomial-time semantics for an existing protocol security logic, replacing an earlier semantics that uses nondeterministic symbolic evaluation. While the basic form of the protocol logic remains unchanged from previous work, there are some interesting technical problems involving the difference between efficiently recognizing and efficiently producing a value, and involving a reinterpretation of standard logical connectives that seems necessary to support certain forms of reasoning.


Game-Based Analysis of Denial-of-Service Prevention Protocols
With Ajay Mahimkar.
In Proc. of 18th IEEE Computer Security Foundations Workshop (CSFW), Aix-en-Provence, France, June 2005, pp. 287-301. IEEE Computer Society, 2005.
The paper in the CSFW proceedings contains a bug in the verification conditions. This is the corrected version.

Abstract:
Availability is a critical issue in modern distributed systems. While many techniques and protocols for preventing denial of service (DoS) attacks have been proposed and deployed in recent years, formal methods for analyzing and proving them correct have not kept up with the state of the art in DoS prevention. This paper proposes a new protocol for preventing malicious bandwidth consumption, and demonstrates how game-based formal methods can be successfully used to verify availability-related security properties of network protocols.

We describe two classes of DoS attacks aimed at bandwidth consumption and resource exhaustion, respectively. We then propose our own protocol, based on a variant of client puzzles, to defend against bandwidth consumption, and use the JFKr key exchange protocol as an example of a protocol that defends against resource exhaustion attacks. We specify both protocols as alternating transition systems (ATS), state their security properties in alternating-time temporal logic (ATL) and verify them using MOCHA, a model checker that has been previously used to analyze fair exchange protocols.
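Client puzzles of the general kind our bandwidth-consumption defense builds on are easy to sketch: the client must find a partial hash preimage, which costs it many hash evaluations, while the server verifies with a single hash. This is a generic hashcash-style puzzle, not the specific variant in the paper:

```python
# Generic hashcash-style client puzzle (not the paper's exact variant):
# the solver searches for a counter whose hash with the server's nonce
# has `bits` leading zero bits; verification is a single hash.
import hashlib, os

def make_puzzle(difficulty_bits=16):
    return os.urandom(8), difficulty_bits

def solve(nonce, bits):
    target = 2 ** (256 - bits)
    counter = 0
    while True:
        h = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return counter
        counter += 1

def verify(nonce, bits, counter):
    h = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < 2 ** (256 - bits)

nonce, bits = make_puzzle(12)    # ~2^12 hashes expected to solve
solution = solve(nonce, bits)
assert verify(nonce, bits, solution)   # verification costs one hash
```

The asymmetry (expensive to solve, cheap to verify and to issue) is what makes puzzles useful against bandwidth-consuming attackers.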


Probabilistic Escrow of Financial Transactions with Cumulative Threshold Disclosure
With Stanisław Jarecki.
In Proc. of 9th International Conference on Financial Cryptography and Data Security, Roseau, Dominica, March 2005, vol. 3570 of Lecture Notes in Computer Science, pp. 172-187. Springer, 2005.

Abstract:
We propose a scheme for privacy-preserving escrow of financial transactions. The objective of the scheme is to preserve privacy and anonymity of the individual user engaging in financial transactions until the cumulative amount of all transactions in a certain category, for example all transactions with a particular counterparty in any single month, reaches a pre-specified threshold. When the threshold is reached, the escrow agency automatically gains the ability to decrypt the escrows of all transactions in that category (and only that category).

Our scheme employs the probabilistic polling idea of Jarecki and Odlyzko [JO97], amended with a novel robustness mechanism that makes the scheme secure against malicious parties. When submitting the escrow of a transaction, with probability that is proportional to the amount of the transaction, the user reveals a share of the key under which all his transactions are encrypted. Therefore, the fraction of shares that are known to the escrow agency is an accurate approximation of the fraction of the threshold amount that has been transacted so far. When the threshold is reached, with high probability the escrow agency possesses all the shares that it needs to reconstruct the key and open the escrows. Our main technical contribution is a novel tool of robust probabilistic information transfer, which we implement using techniques of optimistic fair 2-party computation.
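The probabilistic-polling dynamics can be simulated in a few lines: with each transaction the user reveals random key shares with probability proportional to the amount, so the set of shares the agency holds grows with the total transacted. The parameters and the share-revelation rule below are simplifying assumptions for illustration only:

```python
# Simulation of the probabilistic-polling idea: each transaction
# reveals each of N_SHARES key shares independently with probability
# amount/THRESHOLD, so the agency's share count tracks the fraction of
# the threshold transacted. Parameters are illustrative assumptions.
import random

random.seed(1)
THRESHOLD = 10_000          # monetary threshold for disclosure
N_SHARES = 100              # shares needed to reconstruct the key

def simulate(amounts):
    revealed = set()
    for amount in amounts:
        for share in range(N_SHARES):
            if random.random() < amount / THRESHOLD:
                revealed.add(share)
    return revealed

small = simulate([200] * 10)    # total 2,000: well below the threshold
large = simulate([500] * 40)    # total 20,000: well past the threshold
print(len(small), len(large))   # few shares vs. (nearly) all shares
```

Below the threshold the agency holds only a small fraction of the shares and learns nothing; well past it, with high probability it holds (nearly) all shares and can reconstruct the key.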


Symbolic Protocol Analysis with an Abelian Group Operator or Diffie-Hellman Exponentiation
With Jonathan Millen.
Journal of Computer Security, special issue on selected papers of CSFW-16 (ed. Riccardo Focardi), vol. 13(3), pp. 515-564, 2005.
Warning: Contains serious bugs in the proofs; corrections coming eventually.

Abstract:
We demonstrate that for any well-defined cryptographic protocol, the symbolic trace reachability problem in the presence of an Abelian group operator (e.g., multiplication) can be reduced to solvability of a decidable system of quadratic Diophantine equations. This result enables complete, fully automated formal analysis of protocols that employ primitives such as Diffie-Hellman exponentiation, multiplication, and XOR, with a bounded number of role instances, but without imposing any bounds on the size of terms created by the attacker.


Reputation-Based Trust Management
With Carolyn Talcott.
Journal of Computer Security, special issue on selected papers of WITS 2003 (ed. Roberto Gorrieri), vol. 13(1), pp. 167-190, 2005.

Abstract:
We propose a formal model for reputation-based trust management. In contrast to credential-based trust management, in our framework an agent's reputation serves as the basis for trust. For example, an access control policy may consider the agent's reputation when deciding whether to offer him a license for accessing a protected resource. The underlying semantic model is an event semantics inspired by the actor model, and assumes that each agent has only partial knowledge of the events that have occurred. Restrictions on agents' behavior are formalized as licenses, with "good" and "bad" behavior interpreted as, respectively, license fulfillment and violation. An agent's reputation comprises four kinds of evidence: completely fulfilled licenses, ongoing licenses without violations or misuses, licenses with violated obligations, and misused licenses. This approach enables precise formal modeling of scenarios involving reputations, such as financial transactions based on credit histories and information sharing between untrusted agents.


Privacy-Preserving Sharing and Correlation of Security Alerts
With Patrick Lincoln and Phillip Porras.
In Proc. of 13th USENIX Security Symposium, San Diego, CA, August 2004, pp. 239-254. USENIX, 2004.

Abstract:
We present a practical scheme for Internet-scale collaborative analysis of information security threats which provides strong privacy guarantees to contributors of alerts. Wide-area analysis centers are proving a valuable early warning service against worms, viruses, and other malicious activities. At the same time, protecting individual and organizational privacy is no longer optional in today's business climate. We propose a set of data sanitization techniques that enable community alert aggregation and correlation, while maintaining privacy for alert contributors. Our approach is practical, scalable, does not rely on trusted third parties or secure multiparty computation schemes, and does not require sophisticated key management.


Synchronous Batching: From Cascades to Free Routes
With Roger Dingledine and Paul Syverson.
In Proc. of 4th Workshop on Privacy Enhancing Technologies (PET), Toronto, Canada, May 2004, vol. 3424 of Lecture Notes in Computer Science, pp. 186-206. Springer, 2005.

Abstract:
The variety of possible anonymity network topologies has spurred much debate in recent years. In a synchronous batching design, each batch of messages enters the mix network together, and the messages proceed in lockstep through the network. We show that a synchronous batching strategy can be used in various topologies, including a free-route network, in which senders choose paths freely, and a cascade network, in which senders choose from a set of fixed paths. We show that free-route topologies can provide better anonymity as well as better message reliability in the event of partial network failure.


Handcuffing Big Brother: An Abuse-Resilient Transaction Escrow Scheme
With Stanisław Jarecki.
In Proc. of Advances in Cryptology - EUROCRYPT 2004, Interlaken, Switzerland, May 2004, vol. 3207 of Lecture Notes in Computer Science, pp. 590-608. Springer, 2004.

Abstract:
We propose a practical abuse-resilient transaction escrow scheme with applications to privacy-preserving audit and monitoring of electronic transactions. Our scheme ensures correctness of escrows as long as at least one of the participating parties is honest, and it ensures privacy and anonymity of transactions even if the escrow agent is corrupt or malicious. The escrowed information is secret and anonymous, but the escrow agent can efficiently find transactions involving some user in response to a subpoena or a search warrant. Moreover, for applications such as abuse-resilient monitoring of unusually high levels of certain transactions, the escrow agent can identify escrows with particular common characteristics and automatically (i.e., without a subpoena) open them once their number has reached a pre-specified threshold.

Our solution for transaction escrow is based on the use of Verifiable Random Functions. We show that by tagging the entries in the escrow database using VRFs indexed by users' private keys, we can protect users' anonymity while enabling efficient and, optionally, automatic de-escrow of these entries. We give a practical instantiation of a transaction escrow scheme utilizing a simple and efficient VRF family secure under the DDH assumption in the Random Oracle Model.


Unifying Equivalence-Based Definitions of Protocol Security
With Anupam Datta, Ralf Küsters, John C. Mitchell, and Ajith Ramanathan.
In Proc. of Workshop on Issues in the Theory of Security (WITS), Barcelona, Spain, April 2004, pp. 189-204.

Abstract:
Several related research efforts have led to three different ways of specifying protocol security properties by simulation or equivalence. Abstracting the specification conditions away from the computational frameworks in which they have been previously applied, we show that when asynchronous communication is used, universal composability, black-box simulatability, and process equivalence express the same properties of a protocol. Further, the equivalence between these conditions holds for any computational framework, such as process calculus, that satisfies certain structural properties. Similar but slightly weaker results are achieved for synchronous communication.


Decidable Analysis of Cryptographic Protocols with Products and Modular Exponentiation
In Proc. of 13th European Symposium on Programming (ESOP), Barcelona, Spain, March 2004, vol. 2986 of Lecture Notes in Computer Science, pp. 355-369. Springer, 2004.

Abstract:
We demonstrate that the symbolic trace reachability problem for cryptographic protocols is decidable in the presence of an Abelian group operator and modular exponentiation from arbitrary bases. We represent the problem as a sequence of symbolic inference constraints and reduce it to a system of linear Diophantine equations. For a finite number of protocol sessions, this result enables fully automated, sound and complete analysis of protocols that employ primitives such as Diffie-Hellman exponentiation and modular multiplication without imposing any bounds on the size of terms created by the attacker, but taking into account the relevant algebraic properties.


Probabilistic Model Checking of an Anonymity System
Journal of Computer Security, special issue on selected papers of CSFW-15 (ed. Steve Schneider), vol. 12(3/4), pp. 355-377, 2004.

Abstract:
We use the probabilistic model checker PRISM to analyze the Crowds system for anonymous Web browsing. This case study demonstrates how probabilistic model checking techniques can be used to formally analyze security properties of a peer-to-peer group communication system based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and the desired security properties are expressed as PCTL formulas. The PRISM model checker is used to perform automated analysis of the system and verify the anonymity guarantees it provides. Our main result is a demonstration of how certain forms of probabilistic anonymity degrade when group size increases or random routing paths are rebuilt, assuming that the corrupt group members are able to identify and/or correlate multiple routing paths originating from the same sender.
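The flavor of the analysis can be conveyed by a back-of-the-envelope calculation (this is not the paper's PRISM model). In Crowds with n members, c of them corrupt, and forwarding probability pf, the classic Reiter-Rubin result gives the probability that the member immediately preceding the first corrupt node on a path is the true initiator; the path-rebuilding figure below is a crude independence approximation for illustration only.

```python
def prob_sender_observed(n: int, c: int, pf: float) -> float:
    """Reiter-Rubin probability that the member immediately preceding
    the first corrupt node on a Crowds path is the true initiator
    (n members, c of them corrupt, forwarding probability pf)."""
    return 1.0 - pf * (n - c - 1) / n

# Probable innocence (probability <= 1/2) requires
# n >= pf / (pf - 0.5) * (c + 1).
p = prob_sender_observed(n=20, c=5, pf=0.8)  # 1 - 0.8 * 14/20 = 0.44

# Crude illustration of degradation under path rebuilding: if corrupt
# members could treat R rebuilt paths as independent observations, the
# chance of at least one exposure would grow as 1 - (1 - p)^R.
exposure_after_rebuilds = 1.0 - (1.0 - p) ** 5
```

The model-checking approach in the paper goes beyond such closed-form estimates: it mechanically explores the Markov chain and verifies PCTL-expressed anonymity properties, including cases where paths are correlated rather than independent.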


Information Hiding, Anonymity and Privacy: A Modular Approach
With Dominic Hughes.
Journal of Computer Security, special issue on selected papers of WITS 2002 (ed. Joshua Guttman), vol. 12(1), pp. 3-36, 2004.

Abstract:
We propose a new specification framework for information hiding properties such as anonymity and privacy. The framework is based on the concept of a function view, which is a concise representation of the attacker's partial knowledge about a function. We describe system behavior as a set of functions, and formalize different information hiding properties in terms of views of these functions. We present an extensive case study, in which we use the function view framework to systematically classify and rigorously define a rich domain of identity-related properties, and to demonstrate that privacy and anonymity are independent.

The key feature of our approach is its modularity. It yields precise, formal specifications of information hiding properties for any protocol formalism and any choice of the attacker model as long as the latter induce an observational equivalence relation on protocol instances. In particular, specifications based on function views are suitable for any cryptographic process calculus that defines some form of indistinguishability between processes. Our definitions of information hiding properties take into account any feature of the security model, including probabilities, random number generation, timing, etc., to the extent that it is accounted for by the formalism in which the system is specified.


Contract Signing, Optimism, and Advantage
With Rohit Chadha, John C. Mitchell, and Andre Scedrov.
In Proc. of 14th International Conference on Concurrency Theory (CONCUR), Marseille, France, September 2003, vol. 2761 of Lecture Notes in Computer Science, pp. 366-382. Springer, 2003.
The journal version containing all proofs was published in Journal of Logic and Algebraic Programming, special issue on Processes and Security (ed. Roberto Amadio), vol. 64(2), pp. 189-218, 2005.

Abstract:
A contract signing protocol lets two parties exchange digital signatures on a pre-agreed text. Optimistic contract signing protocols enable the signers to do so without invoking a trusted third party. However, an adjudicating third party remains available should one or both signers seek timely resolution. We analyze optimistic contract signing protocols using a game-theoretic approach and prove a fundamental impossibility result: in any fair, optimistic, timely protocol, an optimistic player yields an advantage to the opponent. The proof relies on a careful characterization of optimistic play that postpones communication to the third party. Since advantage cannot be completely eliminated from optimistic protocols, we argue that the strongest property attainable is the absence of provable advantage, i.e., abuse-freeness in the sense of Garay-Jakobsson-MacKenzie.


Intruder Deductions, Constraint Solving and Insecurity Decision in Presence of Exclusive or
With Hubert Comon-Lundh.
In Proc. of 18th Annual IEEE Symposium on Logic in Computer Science (LICS), Ottawa, Canada, June 2003, pp. 271-280. IEEE Computer Society, 2003.

Abstract:
We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to XOR and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as XOR, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem is in NP for the extended intruder theories in the cases of XOR and Abelian groups. This result follows from a normal proof theorem. Then we show how to lift this result in the XOR case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing XOR for a fixed number of sessions.
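The kind of algebraic reasoning the extended intruder can perform is easy to illustrate with a generic example (not taken from the paper): if a protocol ever sends s XOR k and the key k later becomes known, an intruder equipped with the XOR theory recovers s by cancellation, since x XOR x = 0. The protocol trace below is hypothetical.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical trace: the intruder observes s XOR k, then k itself.
s = b"secret!!"
k = b"\x13\x37\x00\xff\xaa\x55\x01\x02"
observed_1 = xor(s, k)   # message 1 on the wire
observed_2 = k           # message 2 reveals the key

# XOR cancellation (x ^ x = 0) lets the intruder deduce s, a derivation
# that is invisible to a free-algebra Dolev-Yao intruder.
recovered = xor(observed_1, observed_2)
assert recovered == s
```

Deciding whether any such algebraic derivation reaches a secret, across all interleavings of a bounded number of sessions, is exactly the constraint-solving problem the paper proves decidable.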


Negotiated Privacy (extended abstract)
With Stanisław Jarecki and Patrick Lincoln.
In Proc. of International Symposium on Software Security, Tokyo, Japan, November 2002, vol. 2609 of Lecture Notes in Computer Science, pp. 96-111. Springer, 2003.

Abstract:
Exponential growth in digital information gathering, storage, and processing capabilities inexorably leads to conflict between well-intentioned government or commercial data mining and the fundamental privacy interests of individuals and organizations. This paper proposes a mechanism that provides cryptographic fetters on the mining of personal data, enabling efficient mining of previously negotiated properties, but preventing any other uses of the protected personal data. Our approach does not rely on complete trust in the analysts to use the data appropriately, nor does it rely on incorruptible escrow agents. Instead, we propose conditional data escrow where the data generators, not the analysts, hold the keys to the data, but analysts can verify that the pre-negotiated queries are enabled. Our solution relies on verifiable, anonymous, and deterministic commitments, which play the role of tags that mark encrypted entries in the analyst's database. The database owner cannot learn anything from the encrypted entries, or even verify his guess of the plaintext on which these entries are based. On the other hand, the verifiable and deterministic properties ensure that the entries are marked with consistent tags, so that the database manager learns when the number of entries required to enable some query reaches the pre-negotiated threshold.


Is it possible to decide whether a cryptographic protocol is secure or not?
With Hubert Comon.
Journal of Telecommunications and Information Technology, special issue on Cryptographic protocol verification (ed. Jean Goubault-Larrecq), vol. 4, pp. 5-15, 2002.

Abstract:
We consider the so-called "cryptographic protocols" whose aim is to ensure some security properties when communication channels are not reliable. Such protocols usually rely on cryptographic primitives. Even if it is assumed that the cryptographic primitives are perfect, the security goals may not be achieved: the protocol itself may have weaknesses which can be exploited by an attacker. We survey recent work on decision techniques for cryptographic protocol analysis.


Finite-State Analysis of Two Contract Signing Protocols
With John C. Mitchell.
Theoretical Computer Science, special issue on Theoretical Foundations of Security Analysis and Design (ed. Roberto Gorrieri), vol. 283(2), pp. 419-450, 2002.

Abstract:
Optimistic contract signing protocols allow two parties to commit to a previously agreed-upon contract, relying on a third party to abort or confirm the contract if needed. These protocols are relatively subtle, since there may be interactions between the subprotocols used for normal signing without the third party, aborting the protocol through the third party, or requesting confirmation from the third party. With the help of Murphi, a finite-state verification tool, we analyze two related contract signing protocols: the optimistic contract signing protocol of Asokan, Shoup, and Waidner, and the abuse-free contract signing protocol of Garay, Jakobsson, and MacKenzie. For the first protocol, we discover that a malicious participant can produce inconsistent versions of the contract or mount a replay attack. For the second protocol, we discover that negligence or corruption of the trusted third party may allow abuse or unfairness. In this case, contrary to the intent of the protocol, the cheated party is not able to hold the third party accountable. We present and analyze modifications to the protocols that avoid these problems and discuss the basic challenges involved in formal analysis of fair exchange protocols.


Constraint Solving for Bounded-Process Cryptographic Protocol Analysis
With Jonathan Millen.
In Proc. of 8th ACM Conference on Computer and Communications Security (CCS), Philadelphia, PA, November 2001, pp. 166-175. ACM, 2001.

Abstract:
The reachability problem for cryptographic protocols with non-atomic keys can be solved via a simple constraint satisfaction procedure.


A Core Calculus of Classes and Mixins
With Viviana Bono and Amit Patel.
In Proc. of 13th European Conference on Object-Oriented Programming (ECOOP), Lisbon, Portugal, June 1999, vol. 1628 of Lecture Notes in Computer Science, pp. 43-66. Springer, 1999.

Abstract:
We develop an imperative calculus that provides a formal model for both single and mixin inheritance. By introducing classes and mixins as the basic object-oriented constructs in a lambda-calculus with records and references, we obtain a system with an intuitive operational semantics. New classes are produced by applying mixins to superclasses. Objects are represented by records and produced by instantiating classes. The type system for objects uses only functional, record, and reference types, and there is a clean separation between subtyping and inheritance.


A Core Calculus of Classes and Objects
With Viviana Bono, Amit Patel, and John C. Mitchell.
In Proc. of 15th Annual Conference on Mathematical Foundations of Programming Semantics (MFPS), vol. 20 of Electronic Notes in Theoretical Computer Science, New Orleans, LA, April 1999, pp. 43-64.

Abstract:
We present an imperative calculus for a class-based language. By introducing classes as the basic object-oriented construct in a lambda-calculus with records and references, we obtain a system with an intuitive operational semantics. Objects are instantiated from classes and represented by records. The type system for objects uses only functional, record, and reference types, and there is a clean separation between subtyping and inheritance. We demonstrate that the calculus is sound and sufficiently expressive to model advanced language features such as inheritance with method redefinition, multi-level encapsulation, and modular object construction.


Efficient Finite-State Analysis for Large Security Protocols
With Ulrich Stern.
In Proc. of 11th IEEE Computer Security Foundations Workshop (CSFW), Rockport, MA, June 1998, pp. 106-115. IEEE Computer Society, 1998.

Abstract:
We describe two state reduction techniques for finite-state models of security protocols. The techniques exploit certain protocol properties that we have identified as characteristic of security protocols. We prove the soundness of the techniques by demonstrating that any violation of protocol invariants is preserved in the reduced state graph. In addition, we describe an optimization method for evaluating parameterized rule conditions, which are common in our models of security protocols. All three techniques have been implemented in the Murphi verifier.


Finite-State Analysis of SSL 3.0
With John C. Mitchell and Ulrich Stern.
In Proc. of 7th USENIX Security Symposium, San Antonio, TX, January 1998, pp. 201-215. USENIX, 1998.

Abstract:
The Secure Sockets Layer (SSL) protocol is analyzed using a finite-state enumeration tool called Murphi. The analysis is presented using a sequence of incremental approximations to the SSL 3.0 handshake protocol. Each simplified protocol is "model-checked" using Murphi, with the next protocol in the sequence obtained by correcting errors that Murphi finds automatically. This process identifies the main shortcomings in SSL 2.0 that led to the design of SSL 3.0, as well as a few anomalies in the protocol that is used to resume a session in SSL 3.0. In addition to some insight into SSL, this study demonstrates the feasibility of using formal methods to analyze commercial protocols.