Ross Anderson's Home Page
A study
of security economics in Europe has been published by the European
Network and Information Security Agency; you have until May 30 to give them and
us your feedback on it. Security
economics has become a thriving field since the turn of the century, and in
this report we make a first cut at applying it in a coherent way to the most
pressing security policy problems.
Many of my papers are available in html and/or pdf, but some of the older
technical ones are in postscript, for which you can download a viewer here. By default, when I post a paper here
I license it under the relevant Creative Commons
license, so you may redistribute it but not modify it. I may subsequently
assign the residual copyright to an academic publisher.
Over the last few years, it's become clear that many systems fail not for
technical reasons so much as from misplaced incentives - often the people who
could protect them are not the people who suffer the costs of failure. There
are also many questions with an economic dimension as well as a technical
one. For example, will digital signatures make electronic commerce more secure?
Is so-called `trusted computing' a good idea, or just another way for Microsoft
to make money? And what about all the press stories about `Internet hacking' -
is this threat serious, or is it mostly just scaremongering by equipment
vendors? It's not enough for security engineers to understand ciphers; we have
to understand incentives as well. This has led to a rapidly growing interest in
`security economics', a discipline which I helped to found. I maintain the Economics and Security
Resource Page, and my research contributions include the following.
Just after WEIS 2008, we organised the world's
first Workshop on Security and Human Behaviour (SHB08). This brought security
engineers together with psychologists, behavioural economists and others
interested in expanding security economics into the behavioural sciences and the
humanities generally. I have a live
blog of the event, and here are the papers; you can also get audio of most of the
sessions.
We did a study of security economics in the Single Market for the European Network
and Information Security Agency. This looks at the market failures underlying
spam, phishing and other online problems, and draws on what we've learned about
security economics to make concrete policy proposals. We believe it's the first
attempt to do this in a systematic way across the whole of information security
policy.
Closing the
Phishing Hole - Fraud, Risk and Nonbanks reports research commissioned by
the US Federal Reserve for their biennial Santa
Fe Conference on bank regulation. This paper identified speedy asset
recovery as the most effective deterrent to online fraud; such fraud is made
easier by systems like Western Union that make the recovery of stolen funds
more difficult.
The
topology of covert conflict asks how the police can best target an
underground organisation given some knowledge of its patterns of communication,
and how it might in turn react to various law-enforcement
strategies. We present a framework combining ideas from network analysis and
evolutionary game theory to explore the interaction of attack and defence
strategies in networks. Although we started out thinking about computer
viruses, our work suggests explanations of a number of aspects of modern
conflict generally.
Why Information
Security is Hard - An Economic Perspective was the paper that got
information security people thinking about the subject. It applies economic
analysis to explain many phenomena that security people had found to be
pervasive but perplexing. Why do mass-market software products such as Windows
contain so many security bugs? Why are their security mechanisms so difficult
to manage? Why are government evaluation schemes, such as the Orange Book and
the Common Criteria, so bad?
In the first part of my Toulouse paper, I
show that the usual argument about open source security - whether source access
makes it easier for the
defenders to find and fix bugs, or makes it easier for the attackers
to find and exploit them - is misdirected. Under standard assumptions used by
the reliability growth modelling community, the two will exactly cancel each
other out. That means that whether open or closed systems are more secure in a
given situation will depend on whether, and how, the application deviates from
the standard assumptions. The ways in which this can happen, and make either
open or closed systems better in a specific application, are explored in a
later paper,
Open and
Closed Systems are Equivalent (that is, in an ideal world) which appeared
as a chapter in Perspectives
on Free and Open Source Software. See press coverage in slashdot, news.com and The
Register.
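To make the cancellation concrete, here is a minimal Monte Carlo sketch (my illustration, not the paper's analysis: per-bug discovery rates are drawn exponentially, which reproduces the familiar 1/t reliability growth curve). Opening the source multiplies every rate by the same factor for attackers and defenders alike, which amounts to nothing more than a rescaling of the clock:

    import random

    def attacker_mtbf(alpha: float, t_test: float, bugs: int = 5_000,
                      runs: int = 500) -> float:
        """Mean calendar time for an attacker to hit the first bug that
        survived t_test units of in-house testing, when opening the source
        multiplies every bug's discovery rate by alpha -- for testers and
        attackers alike."""
        total = 0.0
        for _ in range(runs):
            surviving_rate = 0.0
            for _ in range(bugs):
                r = alpha * random.expovariate(1.0)   # per-bug discovery rate
                if random.expovariate(r) > t_test:    # missed during testing
                    surviving_rate += r
            total += random.expovariate(surviving_rate)  # first exploit found
        return total / runs

    ALPHA = 8
    # An open system tested for time T behaves like a closed system tested
    # for time ALPHA*T, with the attacker's clock also sped up by ALPHA; the
    # two figures below agree up to sampling noise, so the factor cancels.
    print(ALPHA * attacker_mtbf(ALPHA, 30))
    print(attacker_mtbf(1, ALPHA * 30))
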
On Dealing with
Adversaries Fairly applies election theory (also known as social choice
theory) to the problem of shared control in distributed systems. It shows how a
number of reputation systems proposed for use in peer-to-peer applications
might be improved. It appeared at WEIS 2004.
The Economics
of Censorship Resistance examines when it is better for defenders to
aggregate or disperse. Should file-sharers build one huge system like gnutella
and hope for safety in numbers, or would a loose federation of fan clubs for
different bands work better? More generally, what are the tradeoffs between
diversity and solidarity when conflict threatens? (This is a live topic in
social policy at the moment - see David
Goodhart's essay, and a response in the Economist.)
This paper also appeared at WEIS
2004.
Since about the middle of 2000, there has been an explosion of interest in
peer-to-peer networking - the business of building useful systems out of large
numbers of intermittently connected machines, with virtual infrastructures that
are tailored to the application. One of the seminal papers in the field was The Eternity
Service, which I presented at Pragocrypt 96. I had been alarmed by the
Scientologists' success at closing down the penet remailer
in Finland, and had been personally threatened by bank lawyers who wanted to
suppress knowledge of the vulnerabilities of ATM systems (see here
for a later incident). This taught me that electronic publications can be easy
for the rich and the ruthless to suppress. They are usually kept on just a few
servers, whose owners can be sued or coerced. To me, this seemed uncomfortably
like books in the Dark Ages: the modern era only started once the printing
press enabled seditious thoughts to be spread too widely to ban. The Eternity
Service was conceived as a means of putting electronic documents as far outwith
the censor's grasp as possible. (My
original fear has since materialised; a UK
court judgment has found that a newspaper's online archives can be altered
by order of a court to remove a libel.)
But history never repeats itself exactly, and the real fulcrum of censorship in
cyberspace turned out to be not sedition, or vulnerability disclosure, or even
pornography, but copyright. Hollywood's action against Napster led to my Eternity
Service ideas being adopted by many systems including Publius and Freenet. Many of these developments
were described in an important book,
and the first
academic conference on peer-to-peer systems was held in March 2002 at MIT.
The field has since become very active. See also Richard Stallman's classic, The Right to Read.
My contributions since the Eternity paper include:
New Strategies for
Revocation in Ad-Hoc Networks won the best paper award at ESAS07. In it we show how to
use suicide bombing for revocation in networks. Suicide attacks are found
widely in nature, from bees to helper T-cells; this model may help explain why
(press coverage here
and here).
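The mechanism itself is simple enough to sketch. In the toy Python below (my own illustration with hypothetical names, and HMAC standing in for the signatures a real network would use - not the ESAS07 construction in detail), a node that detects a misbehaving neighbour broadcasts a signed `suicide note' naming both itself and the accused; the network revokes the pair together, so a false accusation costs the accuser its own place in the network:

    import hmac, hashlib

    class Network:
        """Toy ad-hoc network in which revocation is by 'suicide note'."""
        def __init__(self, node_keys):
            self.keys = node_keys                  # node id -> MAC key
            self.live = set(node_keys)             # currently trusted nodes

        def suicide(self, accuser, accused, tag):
            """Process a suicide note: if genuinely signed by the accuser,
            revoke BOTH parties. Dying with the target deters slander."""
            note = f"{accuser} dies to revoke {accused}".encode()
            want = hmac.new(self.keys[accuser], note, hashlib.sha256).digest()
            if accuser in self.live and hmac.compare_digest(tag, want):
                self.live -= {accuser, accused}

    net = Network({"a": b"ka", "b": b"kb", "c": b"kc"})
    note = b"b dies to revoke c"                   # b has caught c misbehaving
    tag = hmac.new(b"kb", note, hashlib.sha256).digest()
    net.suicide("b", "c", tag)
    assert net.live == {"a"}
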
I worked on the security of Homeplug, an industry standard for broadband
communication over the power mains. A paper
on what we did and why appeared at SOUPS 2006. This is a
good worked example of how to do key establishment in a real peer-to-peer
system. The core problem is this: how can you be sure you're recruiting the
right device to your network, rather than a similar one nearby?
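The accepted answer is to bootstrap from something physical. Here is a toy sketch of that general approach (hedge: the identifiers and the XOR wrap below are illustrative stand-ins of mine, not the HomePlug mechanism itself). Each device carries a key printed on its label; the user types that key into the controller, which uses it to deliver the network key, so only the physically identified device - and not the neighbour's similar one - can join:

    import hashlib, os

    def kdf(label_key: str) -> bytes:
        """Derive a wrapping key from the password printed on the device."""
        return hashlib.sha256(b"dev|" + label_key.encode()).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # The controller wraps the network membership key under a key derived
    # from the label the user typed in; only the matching device unwraps it.
    network_key = os.urandom(32)
    typed_in = "XKCD-1234-BATTERY"             # read off the device label
    wrapped = xor(network_key, kdf(typed_in))  # toy wrap: one-time XOR pad

    on_device = "XKCD-1234-BATTERY"            # burned in at manufacture
    assert xor(wrapped, kdf(on_device)) == network_key   # right device joins
    neighbour = "XKCD-9999-STAPLE"
    assert xor(wrapped, kdf(neighbour)) != network_key   # neighbour's doesn't
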
Sybil-resistant DHT
routing appeared at ESORICS 2005 and showed how we can
make peer-to-peer systems more robust against disruptive attacks if we know
which nodes introduced which other nodes. The convergence of computer science
and social network theory is an interesting recent phenomenon, and not limited
to search and recommender systems.
Key
Infection - Smart trust for Smart Dust appeared at ICNP 2004 and presents a radically
new approach to key management in sensor and peer-to-peer networks. Peers
establish keys opportunistically and use resilience mechanisms to fortify
the system against later node compromise. This work challenges the old
assumption that authentication is largely a bootstrapping problem.
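The flavour of the scheme can be conveyed in a few lines (a simulation sketch under assumed eavesdropping probabilities of my choosing, not the paper's protocol details). Initial keys go out in plaintext, so a sparse eavesdropper hears each with some probability; multipath reinforcement then XORs in nonces sent along several independent paths, so the attacker must have overheard the initial key and every nonce:

    import random

    def compromise_prob(p_hear: float, paths: int,
                        trials: int = 100_000) -> float:
        """Chance the attacker ends up knowing a link key: she must overhear
        the plaintext key exchange AND the nonce on every reinforcement path
        (each heard independently with probability p_hear)."""
        wins = 0
        for _ in range(trials):
            knows = random.random() < p_hear           # the initial whisper
            for _ in range(paths):
                knows = knows and (random.random() < p_hear)
            if knows:
                wins += 1
        return wins / trials

    # A sparse attacker hearing 10% of transmissions: reinforcing over three
    # extra paths drives her success rate from 10% down to about 0.01%.
    print(compromise_prob(0.10, 0))   # ~0.1
    print(compromise_prob(0.10, 3))   # ~0.0001
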
The Economics of Censorship Resistance, discussed above among my security
economics papers, examines when it is better for defenders to aggregate or
disperse: one huge file-sharing system like gnutella with safety in numbers, or
a loose federation of fan clubs for different bands.
A New Family of
Authentication Protocols presented our `Guy Fawkes Protocol', which enables users
to sign messages using only two computations of a hash function and one
reference to a timestamping service. It led to many protocols for signing
digital streams; the paper also raised foundational questions about the
definition of a digital signature.
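The core trick is that each published block carries a hash commitment to the next, so verification needs only hash computations and no public-key operations. Here is a minimal sketch (it assumes the signer knows the message stream in advance and builds the chain back to front; the Guy Fawkes protocol itself relaxes that by committing one step ahead and relying on a timestamping service to fix publication order):

    import hashlib, os

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def make_chain(messages):
        """Build blocks back to front: each block commits, by hash, to the
        next one, so only h() is needed to sign."""
        commit = b"\x00" * 32                  # terminator after the last block
        blocks = []
        for m in reversed(messages):
            x = os.urandom(16)                 # fresh codeword per block
            blocks.append((m, x, commit))
            commit = h(m + x + commit)         # commitment to this block
        blocks.reverse()
        return commit, blocks                  # commit = authenticated root

    def verify(root, blocks):
        expected = root
        for m, x, nxt in blocks:
            if h(m + x + nxt) != expected:
                return False
            expected = nxt
        return True

    root, blocks = make_chain([b"attack at dawn", b"no, wait", b"at noon"])
    assert verify(root, blocks)
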
The Cocaine
Auction Protocol explored how transactions can be conducted between
mutually mistrustful principals with no trusted arbitrator, while giving a high
degree of privacy against traffic analysis.
I ran a CMI project with Frans Kaashoek and Robert Morris on building a
next-generation peer-to-peer system. I gave a keynote talk about this at the Wizards of OS
conference in Berlin; the slides are here.
Many security system failures are due to poorly designed protocols, and this
has been a Cambridge interest for many years. Some relevant papers follow.
API Level
Attacks on Embedded Systems are an extremely powerful way to attack
cryptographic processors, and indeed any systems where more trusted systems
talk to less trusted ones. The idea is that a `secure' device can often be
defeated by sending it some sequence of transactions which causes it to break
its security policy. We've defeated pretty well every security processor we've
looked at, at least once. This line of research originated at Protocols 2000
with my paper The Correctness of
Crypto Transaction Sets; more followed in my book. Robbing the bank
with a theorem prover shows how to apply advanced tools to the problem,
and ideas for future research can be found in Protocol
Analysis, Composability and Computation. There is a snapshot of the state
of the art in late 2005 in a survey of
cryptographic processors, a shortened version of which appeared in the
February 2006 Proceedings of the IEEE.
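To give the flavour, here is a toy security module (entirely hypothetical: the command names and the scheme are stand-ins of mine, loosely modelled on the way real banking modules derive PINs by encrypting the account number under a PIN key, with HMAC-SHA256 standing in for the block cipher). One command forgets to check key types, and chaining two legitimate-looking transactions breaks the security policy:

    import hmac, hashlib

    class ToyHSM:
        """A deliberately flawed security module: keys never leave the box,
        but one command forgets to check key types."""
        def __init__(self):
            self.keys = {"pin1":  (b"secret-pin-derivation-key", "PIN"),
                         "data1": (b"bulk-data-key", "DATA")}

        def derive_pin(self, handle, account: bytes) -> str:
            key, tag = self.keys[handle]
            assert tag == "PIN"
            digest = hmac.new(key, account, hashlib.sha256).hexdigest()
            # A real module would print this to a secure PIN mailer; we
            # return it here only so the attack below can be checked.
            return str(int(digest, 16) % 10_000).zfill(4)

        def encrypt_data(self, handle, data: bytes) -> bytes:
            key, tag = self.keys[handle]
            # BUG: no check that tag == "DATA" -- a type-confusion hole.
            return hmac.new(key, data, hashlib.sha256).digest()

    hsm = ToyHSM()
    # The attack: feed an account number to the data-encryption command
    # using the PIN key's handle, and read the customer PIN off the output.
    mac = hsm.encrypt_data("pin1", b"4000123412341234")
    pin = str(int(mac.hex(), 16) % 10_000).zfill(4)
    assert pin == hsm.derive_pin("pin1", b"4000123412341234")

Each transaction is individually legal; it is the sequence, used across a type boundary the designers forgot to police, that breaks the policy.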
Programming
Satan's Computer is a phrase Roger Needham and I coined to express
the difficulty of designing cryptographic protocols; it has recently been
popularised by Bruce Schneier (see, for example, his foreword to my book). The problem of
designing programs which run robustly on a network containing a malicious
adversary is rather like trying to program a computer which gives subtly wrong
answers at the worst possible moments.
Robustness principles for public key protocols gives a number of attacks on protocols
based on public key primitives. It also puts forward some principles which can
help us to design robust protocols, and to find attacks on other people's
designs. It appeared at Crypto 95.
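One of those principles - be explicit about names - is neatly illustrated by Gavin Lowe's later attack on the Needham-Schroeder public-key protocol (my example, found by Lowe rather than taken from the Crypto 95 paper). The sketch below models encryption as a tuple only the named principal can open; because message 3 omits the responder's name, an insider Eve can use Alice's run with her to impersonate Alice to Bob:

    # "Encryption": only the principal named in the tuple may open it.
    def enc(to, payload): return (to, payload)
    def dec(me, msg):
        to, payload = msg
        assert to == me, "cannot decrypt another principal's message"
        return payload

    Na, Nb = "Na", "Nb"                    # nonces (symbolic)

    # Alice innocently starts a run with Eve...
    m1 = enc("Eve", (Na, "Alice"))
    # ...who re-encrypts it to Bob, posing as Alice.
    m2 = enc("Bob", dec("Eve", m1))
    # Bob replies for Alice; crucially, his own name is NOT in the message.
    m3 = enc("Alice", (Na, Nb))
    # Alice thinks m3 belongs to her run with Eve, so she returns Nb to Eve.
    m4 = enc("Eve", dec("Alice", m3)[1])
    # Eve now completes Bob's run: Bob believes he is talking to Alice.
    m5 = enc("Bob", dec("Eve", m4))
    assert dec("Bob", m5) == Nb
    # Lowe's fix: make message 3 enc("Alice", (Na, Nb, "Bob")) -- Alice would
    # then see the name "Bob" where she expected "Eve", and abort.
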
The Cocaine
Auction Protocol explores how transactions can be conducted between
mutually mistrustful principals with no trusted arbitrator, even in
environments where anonymous communications make most of the principals
untraceable.
NetCard - A
Practical Electronic Cash Scheme presents research on micropayment
protocols for use in electronic commerce. We invented tick payments
simultaneously with Torben Pedersen and with Ron Rivest and Adi Shamir; we all
presented our work at Protocols 96.
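The idea behind tick payments can be shown in a few lines (a generic hash-chain sketch of the kind NetCard and its contemporaries used; the parameters are illustrative). The payer authenticates the tip of a hash chain once, then pays for each tick of service by revealing the next preimage - one hash for the merchant to verify, a preimage search for anyone to forge:

    import hashlib, os

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Build the chain: w[0] is secret, w[n] is the publicly committed tip.
    n = 100
    w = [os.urandom(32)]
    for _ in range(n):
        w.append(h(w[-1]))
    tip = w[n]          # authenticated once, e.g. inside a signed contract

    # Paying tick i means revealing w[n-i]; one hash checks each payment.
    last_verified = tip
    for i in range(1, 4):                 # three ticks of service
        payment = w[n - i]
        assert h(payment) == last_verified
        last_verified = payment

The merchant's marginal cost per tick is a single hash computation, which is what made such schemes attractive for pay-per-use commerce.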
The GCHQ
Protocol and its Problems pointed out a number of flaws in a key management
protocol that GCHQ promoted as a European alternative to Clipper, until we shot
it down with this paper at Eurocrypt 97. Many of the criticisms we developed
here also apply to the more recent, pairing-based cryptosystems.
The Formal
Verification of a Payment System describes the first use of formal methods
to verify an actual payment protocol, which was (and still is) used in an
electronic purse product (VISA's COPAC card). This is a teaching example I use
to get the ideas of the BAN logic across to undergraduates. There is further
detailed information in a technical
report, which combines papers given at ESORICS 92 and Cardis 94.
On Fortifying
Key Negotiation Schemes with Poorly Chosen Passwords presents a simple way
of achieving the same result as protocols such as EKE, namely preventing
middleperson attacks on Diffie-Hellman key exchange between two people whose
shared secret could be guessed by the enemy.
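To illustrate the goal, here is a sketch of the EKE idea that the paper takes as its comparison point (not the scheme proposed in the paper itself, and with a toy group and toy cipher far too weak for real use). Each side masks its Diffie-Hellman value under the shared password, so a middleperson who doesn't know the password cannot substitute values that unmask to anything useful:

    import hashlib, secrets

    P = 2**89 - 1     # a Mersenne prime -- toy-sized, NOT for real use
    G = 3

    def mask(pw: bytes, x: int) -> int:
        """'Encrypt' a group element under the password (toy XOR cipher;
        mask() is its own inverse)."""
        pad = int.from_bytes(hashlib.sha256(b"eke|" + pw).digest()[:12], "big")
        return x ^ pad

    pw = b"correct horse"
    a = secrets.randbelow(P - 2) + 1          # Alice's secret exponent
    b = secrets.randbelow(P - 2) + 1          # Bob's secret exponent

    # Each side sends its DH value masked under the password.
    ma = mask(pw, pow(G, a, P))
    mb = mask(pw, pow(G, b, P))

    # Knowing pw, each unmasks the other's value and agrees on a key.
    k_alice = pow(mask(pw, mb), a, P)
    k_bob   = pow(mask(pw, ma), b, P)
    assert k_alice == k_bob
    # A middleperson without pw ends up negotiating garbage, which a
    # key-confirmation step then catches; guessing pw offline yields no
    # way to test the guesses.
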
Protocols have been the stuff of high drama. Citibank asked the High Court to
gag the
disclosure of certain crypto API
vulnerabilities that affect a number of systems used in banking. I wrote to
the judge opposing
this; a gagging
order was still imposed, although in slightly less severe terms than
Citibank had requested. The trial was in camera, the banks' witnesses didn't
have to answer questions about vulnerabilities, and new information revealed
about these vulnerabilities in the course of the trial may not be disclosed in
England or Wales. Information already in the public
domain was unaffected. The vulnerabilities were discovered by Mike Bond and me while acting as the
defence experts in a phantom withdrawal court case, and independently
discovered by the other side's expert, Jolyon Clulow, who later joined
us as a research student. They are of significant scientific interest, as well as being
relevant to the rights of the growing number of people who suffer phantom withdrawals from their
bank accounts worldwide. Undermining the fairness of trials and forbidding
discussion of vulnerabilities isn't the way forward. See press coverage by the
New
Scientist, the Register,
Slashdot,
news.com, and
Zdnet.
I have been interested for many years in how security systems fail in real
life. This is a prerequisite for building robust secure systems; many security
designs are poor because they are based on unrealistic threat models. This work
began with a study of automatic teller machine fraud, and expanded to other
applications as well. It now provides the central theme of my book.
Why Cryptosystems
Fail may have been cited more than anything else I've written. This version
appeared at ACMCCS 93 and explains how ATM fraud was done in the early 1990s.
Liability and
Computer Security - Nine Principles took this work further, examining
the problems of relying on cryptographic evidence. The recent introduction of
EMV ('chip and PIN') was supposed to fix the problem, but hasn't: Phish and
Chips documents protocol weaknesses in EMV, and A Note on EMV Secure
Messaging in the IBM 4758 CCA documents even more. The
Man-in-the-Middle Defence shows how to turn protocol weaknesses to
advantage. For example, a bank customer can take an electronic attorney along
to a chip-and-PIN transaction to help ensure that neither the bank nor the
merchant rips him off. Recently, the banks have started moving to RFID cards
for small payments: see my paper RFID and the
Middleman for what happens next. (I also have a talk
on phishing.)
On a New Way to
Read Data from Memory describes techniques we developed that use lasers to
read out memory contents directly from a chip, without using the read-out
circuits provided by the vendor, in order to defeat access controls and even
recover data from damaged devices. Collaborators at Louvain have developed ways
to do this using electromagnetic induction, which are also described. The work
builds on methods described in an earlier paper, on Optical Fault
Induction Attacks, which showed how laser pulses could be used to induce
faults in smartcards that would leak secret information. We can write
arbitrary values into registers or memory, reset protection bits, break out of
loops, and cause all sorts of mayhem. That paper made the front page of the New York
Times; it also got covered by the New
Scientist and slashdot. It was presented at CHES
2002.
After we discovered the above attacks, we developed a new, more secure, CPU
technology which uses redundant failure-evident logic to thwart attacks based
on fault induction or power analysis. Our first paper on this
technology won the best presentation award in April at Async 2002. Our
journal paper, Balanced
Self-Checking Asynchronous Logic for Smart Card Applications, has details
and test results.
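The underlying trick is dual-rail encoding (the toy model below abstracts away the asynchronous handshaking of the actual designs). Each logical bit travels on two wires, so exactly one wire is active whichever value is sent - balancing the power draw - and the illegal both-wires-high state serves as an alarm that propagates through every gate:

    # Dual-rail encoding: a logical bit uses two wires.
    ZERO, ONE, ALARM = (0, 1), (1, 0), (1, 1)   # (1,1) means "fault detected"

    def and_gate(a, b):
        """Dual-rail AND: any alarm on an input propagates to the output,
        so a fault injected anywhere surfaces at the edge of the chip."""
        if a == ALARM or b == ALARM:
            return ALARM
        return ONE if (a == ONE and b == ONE) else ZERO

    assert and_gate(ONE, ZERO) == ZERO
    assert and_gate(ONE, ONE) == ONE
    # A glitch that drives both wires of one input high is detected rather
    # than silently mis-evaluated:
    assert and_gate(ALARM, ONE) == ALARM
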
On the Security
of Digital Tachographs looks at the techniques used to manipulate the
tachographs that are used in Europe to police truck and bus drivers' hours. It
successfully predicted how the introduction of smartcard-based digital
tachographs throughout Europe from 2005 would affect fraud and tampering.
How to Cheat
at the Lottery reports a novel and, I hope, entertaining experiment in
software requirements engineering.
The Grenade
Timer describes a novel way to protect low-cost processors against
denial-of-service attacks, by limiting the number of processing cycles an
application program can consume.
The Millennium
Bug - Reasons Not to Panic describes our experience in coping with the bug
at Cambridge University and elsewhere. This paper correctly predicted that the
bug wouldn't bite very hard. (Journalists were not interested, despite a major
press
release by the University.)
The Memorability
and Security of Passwords -- Some Empirical Results tackles an old problem
- how do you train users to choose passwords that are easy to remember but hard
to guess? There's a lot of `folk wisdom' on this subject but little that would
pass muster by the standards of applied psychology. So we did a randomized
controlled trial with a few hundred first year science students. While we
confirmed some common beliefs, we debunked some others. This has become one of
the classic papers on security usability.
Murphy's law,
the fitness of evolving species, and the limits of software reliability
shows how we can apply the techniques of statistical thermodynamics to the
failure modes of any complex logical system that evolves under testing. It
provides a common mathematical model for the reliability growth of complex
computer systems and for biological evolution. Its findings are in close
agreement with empirical data. This paper inspired later
work in security economics.
Security
Policies play a central role in secure systems engineering. They provide a
concise statement of the kind of protection a system is supposed to achieve.
This article is a security policy tutorial.
Combining
cryptography with biometrics shows that in those applications where you can
benefit from biometrics, you often don't need a large central database (as
proposed in the ID card Bill). There are
smarter, more resilient, and less privacy-invasive ways to arrange things.
Reports
of an attack on the hash function SHA have made Tiger, which Eli Biham and I designed in
1995, a popular choice of cryptographic hash function. I also worked with Eli,
and with Lars Knudsen, to develop Serpent - a candidate
block cipher for the Advanced Encryption
Standard. Serpent won through to the final of the competition and got the
second largest number of votes. Another of my contributions was founding the
series of workshops on Fast Software
Encryption.
Other papers on cryptography and cryptanalysis include the following.
The Dancing Bear - A
New Way of Composing Ciphers presents a new way to combine crypto
primitives. Previously, to decrypt using (say) any three out of five keys, the
keys all had to be of the same type (such as RSA keys). With my new
construction, you can mix and match - RSA, AES, even one-time pad. The paper
appeared at the 2004 Protocols Workshop; an earlier version came out at the FSE 2004 rump session.
Two Remarks on
Public Key Cryptology is a note on two ideas I floated at talks I gave in
1997-98, concerning forward-secure signatures and compatible weak keys. The
first of these has inspired later research by others; the second gives a new
attack on public key encryption.
Two
Practical and Provably Secure Block Ciphers: BEAR and LION shows how to
construct a block cipher from a stream cipher and a hash function. We already
knew how to construct stream ciphers and hash functions from block
ciphers, and hash functions from stream ciphers; so this paper completed the
set of elementary reductions. It also led to the `Dancing Bear' above.
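LION itself is only three lines of structure, so here is a runnable sketch (with SHA-256 in counter mode standing in for the stream cipher; the construction assumes a real stream cipher, and the paper proves security under assumptions about the two primitives):

    import hashlib

    BLK = 32   # |L| = the hash output size

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def S(seed: bytes, n: int) -> bytes:
        """Stream-cipher stand-in: SHA-256 in counter mode keyed by seed."""
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def lion_encrypt(k1, k2, block):
        L, R = block[:BLK], block[BLK:]        # block must be longer than BLK
        R = xor(R, S(xor(L, k1), len(R)))
        L = xor(L, H(R))
        R = xor(R, S(xor(L, k2), len(R)))
        return L + R

    def lion_decrypt(k1, k2, block):
        L, R = block[:BLK], block[BLK:]        # same rounds, reverse order
        R = xor(R, S(xor(L, k2), len(R)))
        L = xor(L, H(R))
        R = xor(R, S(xor(L, k1), len(R)))
        return L + R

    k1, k2 = b"\x01" * BLK, b"\x02" * BLK
    msg = b"a large block, much bigger than one hash output" + b"\x00" * 20
    ct = lion_encrypt(k1, k2, msg)
    assert lion_decrypt(k1, k2, ct) == msg and ct != msg

BEAR is the dual construction, with two keyed-hash rounds around a single stream-cipher round.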
Tiger - A
Fast New Hash Function defines a new hash function, which we designed
following Hans Dobbertin's attack on MD4. It was designed to run extremely
fast on the new 64-bit processors such as the DEC Alpha and IA64, while still
running reasonably quickly on existing hardware such as the Intel 80486 and
Pentium. The above link is to the Tiger home page, maintained in Haifa by Eli
Biham; if the network is slow, see my UK mirrors of the Tiger paper, the new
and old reference implementations (the change fixes a padding bug) and the
S-box generation documents. There are also third-party crypto toolkits
supporting Tiger, such as that from Bouncy Castle.
Minding your
p's and q's points out a number of things that can go wrong with the choice
of modulus and generator in public key systems based on discrete log. It
elucidated some of the previously classified reasoning behind the design of the
US Digital Signature Algorithm, and appeared at Asiacrypt 96.
Chameleon -
A New Kind of Stream Cipher shows how to do traitor tracing using symmetric
rather than public-key cryptology. The idea is to turn a stream cipher into one
with reduced key diffusion, but without compromising security. A single
broadcast ciphertext is decrypted to slightly different plaintexts by users
with slightly different keys. This paper appeared at Fast Software Encryption
in Haifa in January 1997.
Searching
for the Optimum Correlation Attack shows that nonlinear combining functions
used in nonlinear filter generators can react with shifted copies of themselves
in a way that opens up a new and powerful attack on many cipher systems. It
appeared at the second workshop on fast software encryption.
The Classification of
Hash Functions showed that correlation freedom is strictly stronger than
collision freedom, and that there are many pseudorandomness properties
other than collision freedom which hash functions may need. It appeared at
Cryptography and Coding 93.
A Faster Attack
on Certain Stream Ciphers shows how to break the multiplex shift register
generator, which is used in satellite TV systems. I found a simple
divide-and-conquer attack on this system in the mid-1980s, a discovery that
got me `hooked' on cryptology. This paper is a refinement of that work.
On Fibonacci
Keystream Generators appeared at FSE3, and shows how to break `FISH', a
stream cipher proposed by Siemens. It also proposes an improved cipher, `PIKE',
based on the same general mechanisms.
From the mid- to late-1990s, I did a lot of work on information hiding.
Soft Tempest: Hidden
Data Transmission Using Electromagnetic Emanations must be one of the more
unexpected and newsworthy papers I've published. It is well known that
eavesdroppers can reconstruct video screen content from radio frequency
emanations; until then, such `Tempest attacks' had been prevented by shielding,
jammers and so on. Our innovation was a set of techniques that enable the
software on a computer to control the electromagnetic radiation it emanates.
This can be used for both attack and defence. To attack a system, malicious
code can hide stolen information in the machine's Tempest emanations and
optimise them for some combination of reception range, receiver cost and
covertness. To defend a system, a screen driver can display sensitive
information using fonts which minimise the energy of RF emanations. This
technology is now fielded in PGP and elsewhere. You can download Tempest fonts
from here.
There is a followup
paper on the costs and benefits of Soft Tempest in military environments,
which appeared at NATO's 1999 RTO meeting on infosec, while an earlier version
of our main paper, which received considerable publicity, is
available here.
Finally, there's some software you can use to play your MP3s over the radio here, a press article
here
and information on more recent optical tempest attacks here.
Hollywood once hoped that copyright-marking systems would help control the
copying of videos, music and computer games. This became high drama when a paper that showed how to break
the DVD/SDMI copyright marking scheme was pulled by its authors from the Information
Hiding 2001 workshop, following legal threats from Hollywood. In fact, the
basic scheme - echo hiding - was among a number that we broke in 1997. The
attack was reported in our paper Attacks on
Copyright Marking Systems, which we published at Info Hiding 1998. We
also wrote Information
Hiding - A Survey, which appeared in Proc IEEE and is a good place to start
if you're new to the field. For the policy aspects, you might read Pam Samuelson. There is much more
about the technology on the web page of my former student Fabien Petitcolas.
Another novel application of information hiding is the Steganographic File
System. It will give you any file whose name and password you know, but if
you do not know the correct password, you cannot even tell that a file of that
name exists in the system! This is much stronger than conventional multilevel
security, and its main function is to protect users against coercion. Two of
our students implemented SFS for Linux: a paper describing the details is here, while the code
is available here. This
functionality has since appeared in a number of crypto products.
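A toy version conveys the idea (a much-simplified sketch of one of the paper's constructions: a real design must also handle block collisions, multiple files, and plausible deniability under partial compulsion). The disk starts as uniformly random blocks; a file's blocks are encrypted and scattered at addresses derived from its name and password, so without the password they are indistinguishable from the untouched random filler:

    import hashlib, os

    class StegoStore:
        BLOCK = 32

        def __init__(self, nblocks: int = 1024):
            # A freshly initialised disk is filled with random data.
            self.disk = [os.urandom(self.BLOCK) for _ in range(nblocks)]

        def _slot(self, name, pw, i):
            d = hashlib.sha256(f"slot|{name}|{pw}|{i}".encode()).digest()
            return int.from_bytes(d[:4], "big") % len(self.disk)

        def _pad(self, name, pw, i):
            return hashlib.sha256(f"pad|{name}|{pw}|{i}".encode()).digest()

        def write(self, name, pw, data: bytes):
            for i in range(0, len(data), self.BLOCK):
                j = i // self.BLOCK
                chunk = data[i:i + self.BLOCK].ljust(self.BLOCK, b"\x00")
                enc = bytes(a ^ b for a, b in
                            zip(chunk, self._pad(name, pw, j)))
                self.disk[self._slot(name, pw, j)] = enc   # looks random

        def read(self, name, pw, nbytes: int) -> bytes:
            out = b""
            for j in range((nbytes + self.BLOCK - 1) // self.BLOCK):
                blk = self.disk[self._slot(name, pw, j)]
                out += bytes(a ^ b for a, b in
                             zip(blk, self._pad(name, pw, j)))
            return out[:nbytes]

    fs = StegoStore()
    secret = b"they cannot even prove that this file exists"
    fs.write("diary", "hunter2", secret)
    assert fs.read("diary", "hunter2", len(secret)) == secret
    # With the wrong password you just get bytes indistinguishable from the
    # random filler -- and no way to tell whether "diary" is there at all.
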
The threat by some governments to ban cryptography has led to a surge of
interest in steganography - the art of hiding messages in other messages. Our
paper On The Limits
of Steganography explores what can and can't be done; it appeared in a
special issue of IEEE JSAC. It developed from an earlier paper, Stretching the
Limits of Steganography, which appeared at the first international workshop
on Information Hiding in 1996. I also started a bibliography
of the subject which is now maintained by Fabien Petitcolas.
The Newton
Channel settles a conjecture of Simmons by exhibiting a high bandwidth
subliminal channel in the ElGamal signature scheme. It appeared at Info Hiding
96.
Medical information security is a subject in which I've worked on and off for
over a decade. It's highly
topical right now: the UK government is building a national database of
medical records, a project which many doctors oppose
(half of all GPs have said they won't
upload their patients' data). Ministers have given a guarantee of
patient privacy, but gone
ahead with collecting data anyway; GPs
and NGOs
are sceptical; see comments on broken government promises here.
There is an article
with some examples of privacy abuses, and a report that
the Real IRA penetrated the Royal Victoria Hospital in Northern Ireland and
used its electronic medical records to gather information on policemen to
target them and their families for murder. (The loyalists have recently been copying them.)
The Ministry of Defence now insists that soldiers using NHS hospitals have
their records coded using false names; however, civilians have less protection.
In one famous case, Helen Wilkinson needed to organise a debate
in Parliament to get ministers to agree to remove defamatory and untrue
information about her from NHS computers. The minister assured the House that
the libels had been removed; months later, they still had not been. Helen
started www.TheBigOptOut.org to
persuade patients to opt out of the databases. The liveliest forum on all this
is here. There follow
my most recent papers on the subject.
I wrote a 2006 report
for the National Audit Office on the health IT expenditure, strategies and
goals of the UK and a number of other developed countries. This showed that the
NHS National Programme for IT is in many ways an outlier, and high-risk.
I am one of the authors of a report on the safety and
privacy of children's databases, done for the UK Information Commissioner;
it concluded that government plans to link up most of the public-sector
databases that hold information on children are misguided. The proposed systems
will be both unsafe and illegal. This report got a lot of
publicity. I spoke on these issues on three
videos made by Action on Rights for Children.
Here is an article I
wrote for Drugs and Alcohol Today analysing the likely effects of the NHS
computing project on patient privacy, particularly in the rehabilitation field.
Patient confidentiality
and central databases appeared in the February 2008 British Journal of
General Practice, calling on GPs to encourage patients to opt out of the NHS
care records service.
Civil servants started pushing for online access to everyone's records in 1992
and I got involved in 1995, when I started consulting for the British Medical
Association on the safety and privacy of clinical information systems. Back
then, the police were given access to all drug prescriptions in the UK, after
the government argued that they needed it to catch the occasional doctor who
misprescribed heroin. The police got their data, they didn't catch Harold Shipman, and
no-one was held accountable.
The NHS slogan in 1995 was `a unified electronic patient record, accessible
to all in the NHS'. The slogan has changed several times, but the goal remains
the same. The Health and Social Care
(Community Health and Standards) Act allowed the Government access to all
medical records in the UK, for the purposes of `Health Improvement'. It
removed many of the patient privacy safeguards in previous legislation. In
addition, the new contract
offered to GPs since 2003 moves ownership of family doctor computers to Primary
Care Trusts (that's health authorities, in oldspeak). There was a token
consultation on confidentiality; the Foundation
for Information Policy Research, which I chair, published a response to it (which was of course ignored).
The last time people pointed out that Department of Health civil servants were
helping themselves illegally to confidential health information, Parliament
passed regulations to legalise their questionable practices, under an Act
that was rushed through in the shadow of the 2001 election and that gave
ministers the power to nationalise personal health information. For the
background to that Act, see an editorial from the
British Medical Journal, a discussion
paper on the problems that the bill could cause for researchers, and an impact
analysis commissioned by the Nuffield Trust. Ministers claimed the records
were needed for cancer registries: yet cancer researchers in many other
countries work with anonymised data (see papers on German cancer registries here and here, and the
website of the Canadian Privacy
Commissioner.) There was contemporary press coverage in the Observer, the New Statesman, and The Register. The
measure has now been consolidated as sections 251 and 252 of the NHS Act 2006, and a
committee called PIAG oversees nonconsensual access to your health records.
Here are some historical, but still relevant, papers that I mostly wrote in
1995-6, when the government last tried to centralise all medical records - and
we saw them off.
Security in Clinical Information Systems was published by the British
Medical Association in January 1996. It sets out rules that can be used to
uphold the principle of patient consent independently of the details of
specific systems. It was the medical profession's initial response to the
safety and privacy problems posed by centralised NHS computer systems.
An
Update on the BMA Security Policy appeared in June 1996 and tells the story
of the struggle between the BMA and the government, including the origins and
development of the BMA security policy and guidelines.
There are comments made
at NISSC 98 on the healthcare protection profiles being developed by NIST for
the DHHS to use in regulating health information systems privacy. The
protection profiles make a number of mistaken assumptions about the threats to
medical systems and about the kinds of protection mechanisms that are
appropriate.
Remarks
on the Caldicott Report raises a number of issues about policy as it was
settled in the late 1990s. The Caldicott Committee was set up by the Major
government to kick the medical privacy issue into touch until after the 1997
election. Its members failed to understand that medical records from which the
names have been removed, but where NHS numbers remain, are not really
anonymous - as large numbers of people in the NHS can map names to numbers (and
need to do this in order to do their jobs).
The
DeCODE Proposal for an Icelandic Health Database analyses a proposal to
collect all Icelanders' medical records into a single database. I evaluated
this for the Icelandic Medical Association and concluded that the proposed
security wouldn't work. The company running it has since hit financial
problems but the
ethical issues remain, and Iceland's
Supreme Court recently allowed a woman to block access to her father's
records because of the information they may reveal about her. (These issues may
recur in the UK with the proposed Biobank database.) I also wrote an analysis
of security targets prepared under the Common Criteria for the evaluation of
this database. For more, see BMJ
correspondence,
the Icelandic doctors
opposing the database, and an article by Einar
Arnason.
Clinical
System Security - Interim Guidelines appeared in the British Medical
Journal on 13th January 1996. It advises healthcare professionals on prudent
security measures for clinical data. The most common threat is that private
investigators use false-pretext telephone calls to elicit personal health
information from assistant staff.
A
Security Policy Model for Clinical Information Systems appeared at the 1996
IEEE Symposium on Security and Privacy. It presents the BMA policy model to the
computer security community in a format comparable to policies such as
Bell-LaPadula and Clark-Wilson. It had some influence on later US health
privacy legislation (the Kennedy-Kassebaum Bill, now HIPAA).
Problems
with the NHS Cryptography Strategy points out a number of errors in, and
ethically unacceptable consequences of, a report on
cryptography produced for the Department of Health. These comments formed the
BMA's response to that report.
Two health IT papers by colleagues deserve special mention. Privacy in clinical
information systems in secondary care describes a hospital system
implementing something close to the BMA security policy (it is described in
more detail in a special issue of the Health
Informatics Journal, v 4 nos 3-4, Dec 1998, which I edited). Second, Protecting Doctors'
Identity in Drug Prescription Analysis describes a system designed to
de-identify prescription data for commercial use; although de-identification
usually does not protect patient privacy very well, there are exceptions, such
as here. This system led to a court case, in which the government tried to stop
its owner promoting it - as it would have competed with their (less
privacy-friendly) offerings. The government lost: the Court of Appeal decided
that personal health information can be used for research without patient
consent, so long as the de-identification is done competently.
I chair the Foundation for Information Policy
Research, which I helped set up in 1998. This body is concerned with
promoting research and educating the public in such topics as the interaction
between computing and the law, and the social effects of IT. We are not a lobby
group; our enemy is ignorance rather than the government of the day, and one of
our main activities is providing accurate and neutral briefing for politicians
and members of the press. Here's an overview of
the issues as we saw them in 1999; some highlights of our work follow.
Key Escrow: My first foray into policy, in 1995, was Crypto in Europe -
Markets, Law and Policy which surveyed the uses of cryptography in Europe
and discussed the shortcomings of public policy. In it, I pointed out that law
enforcement communications intelligence was mostly about traffic analysis and
criminal communications security was mostly traffic security. This was
considered heretical at the time but is now well known. The Risks of Key Recovery, Key
Escrow, and Trusted Third-Party Encryption became perhaps the most widely
cited publication on key escrow. It examines the technical risks, costs, and
implications of deploying systems that would satisfy government wishes. It
was originally presented as testimony to the US Senate, and then also to the Trade
and Industry Committee of the UK House of Commons, together with a further
piece I wrote, The Risks and Costs
of UK Escrow Policy.
The GCHQ
Protocol and its Problems pointed out a number of serious defects in
the protocol
that the British government used to secure its electronic mail, and which it
wanted everyone else to use too. This paper appeared at Eurocrypt 97 and
replies to GCHQ's response
to an earlier version of
our paper. Our analysis prevented the protocol from being widely adopted. The
Global Trust Register is a book of the fingerprints of the world's most
important public keys. It thus implements a top-level certification authority,
but using paper and ink rather than electronics. If the DTI had pushed through
mandatory licensing of cryptographic services, this book would have been banned
in the UK. At a critical point in the lobbying, it enabled me to visit Culture
Secretary Chris Smith and ask why his government wanted to ban my book. This
got crypto policy referred to Cabinet when otherwise it would have been snuck
through by the civil servants.
This work on key escrow and related topics led up to a campaign that FIPR ran
to limit the scope of the Regulation of
Investigatory Powers Act. Originally this would have allowed the police to
obtain, without warrant, a complete history of everyone's web browsing activity
(under the rubric of `communications data'); an amendment FIPR got through the
Lords limited this to the identity of the machines involved in a communication,
rather than the actual web pages.
E-Commerce: FIPR also brought together legal and computing
experts to deconstruct the fashionable late-1990s notion that `digital
certificates' would solve all the problems of e-commerce and e-government. The
mandatory reading for anyone inclined to believe PKI salesmen's claims is Electronic
Commerce - Who Carries the Risk of Fraud?. Other work in this thread
include FIPR's responses to consultations on smartcards, the electronic signature
directive and the ecommerce
bill. Much of what we wrote has direct relevance today - to ID cards, to
Trusted Computing and to debates on liability for ATM fraud and phishing.
Terrorism: A page with Comments on Terrorism
explains why many of the measures that various people have been trying to sell
since the 11th September attacks are unlikely to work as promised. Much
subsequent policy work has been made harder by assorted salesmen, centralisers,
rent-seekers and chancers talking about terror; I recently testified against
police attempts to increase pre-charge detention to ninety days, which rested
on the implausible claim that they needed more time to decrypt seized data. We
must constantly push back on the scaremongers.
This issue revived in 2003, with a government attempt to use regulations to
wrest back much of what it had conceded in Parliament. FIPR fought back
and extracted assurances
from Lord Sainsbury about the interpretation of regulations made under the
Act. This may seem technical, but is important for British science and academic
freedom. Without our campaign, much scientific collaboration would have become
technically illegal, leaving scientists open to arbitrary harassment. Much
credit goes to the Conservative frontbencher Doreen
Miller, Liberal Democrat frontbencher Margaret
Sharp, and the then President of the Royal Society Bob May, who
marshalled the crossbenchers in the Lords. We are very grateful to them for
their efforts.
Trusted Computing was a focus in 2002-03. I wrote a Trusted Computing FAQ
that was very widely read, followed by a study of the
competition policy aspects of this technology. This led inter alia to a symposium organised by the German government, which
in turn pushed the Trusted Computing Group into incorporating, admitting small
companies, and issuing implementation guidelines. The next act in this drama
awaits the launch of Windows Vista.
IP Enforcement: Our top priority in 2003-04 was the EU IPR
enforcement directive, which has been succinctly described as DMCA
on steroids and also criticised by a
number of distinguished lawyers. Our lobbying helped secure some positive
amendments - notably, removing criminal sanctions and legal protection for
devices such as RFID tags. Here are further criticisms of the directive by AEL. This law
was supported by Microsoft (since convicted of anticompetitive behaviour), the
music industry, and the owners of luxury brands such as Yves Saint Laurent; it
was opposed by phone companies, supermarkets, smaller software firms and the free
software community. The press was sceptical - in Britain,
France and even America. The issue is even linked to a boycott of
Gillette. There is more on my blog.
Identity Cards were a clever political ploy; they divided
the Conservatives in 2004-5, setting the authoritarian leader Michael Howard
against the libertarian majority in his shadow cabinet. But they are not good
security engineering. I testified to the Home Affairs committee in 2004 that they
would not work as advertised, and contributed to the LSE
Report that spelled this out in detail. I'd produced numerous previous
pieces in response to government identity consultations, on aspects such as smartcards and PKI. There's more
in my book (ch. 6). Why
are ministers and officials incapable of listening to scientific advice?
Internet Censorship is a growing problem, and not just in
developing countries. In 1995, I tried to forestall it by inventing the Eternity
Service (a precursor of later file-sharing systems - see above). But despite the technical difficulties and
collateral costs of content filtering, governments aren't giving up. In 2006, I
became a principal investigator for the OpenNet Initiative, which monitors
Internet filtering worldwide. A recent publication is Shifting
Borders, which reviews the state of play in late 2007, and appeared in the
Index
on Censorship.
My pro-bono work also includes sitting on Council, our University's governing
body. I stood for election in 2002 because I was concerned about the erosion of
academic freedom. See, for example, a truly shocking
speech by Mike Clark, who tells how our administration promised a research
sponsor that he would submit all his relevant papers to them for prior review -
without even asking him! To prevent abuses like this, we founded the Campaign for Cambridge
Freedoms, and campaigned to defeat a proposal that most of the intellectual
property generated by faculty members - from patents on bright ideas to books
written up from lecture notes - would belong to the university rather than to
its creator. Over almost four years of campaigning we drew many of its teeth.
The final vote approved a policy in
which academics keep copyright but the University gets 15% of patent royalties.
I got re-elected to Council in 2006, when I topped the poll.
Finally, here is my PGP
key. If I revoke this key, I will always be willing to explain why I have
done so provided that the giving of such an explanation is lawful. (For
more, see FIPR.)
Security engineering is about building systems to remain dependable in the face
of malice, error or mischance. As a discipline, it focuses on the tools,
processes and methods needed to design, implement and test complete systems,
and to adapt existing systems as their environment evolves. My book has become
the standard textbook and reference since it was published in 2001. You can
download the first edition without charge here.
Security engineering is not just concerned with infrastructure matters such as
firewalls and PKI. It's also about specific applications, such as banking and
medical record-keeping, and about embedded systems such as automatic teller
machines and burglar alarms. It's usually done badly: it often takes several
attempts to get a design right. It is also hard to learn: although there were
good books on a number of the component technologies, such as cryptography and
operating systems, there was little about how to use them effectively, and even
less about how to make them work together. Most systems don't fail because the
mechanisms are weak, but because they're used wrong.
My book was an attempt to help the working engineer to do better. As well as
the basic science, it contains details of many applications - and a lot of case
histories of how their protection failed. It contains a fair amount of new
material, as well as accounts of a number of technologies which aren't well
described in the accessible literature. Writing it was also pivotal in founding
the now-flourishing field of information security
economics: I realised that the narrative had to do with incentives and
organisation at least as often as with the technology. The second edition
incorporates the economic perspectives we've developed over the past six years,
and new perspectives from the psychology of security, as well as updating the
technological side of things.
I don't execute programs sent by strangers without good reason. So I don't
read attachments in formats such as Word, unless by prior arrangement. I also
discard html-only emails, as most of them are spam; and emails asking for
`summer research positions' or `internships', which we don't do.
If you're contacting me about coming up to do a PhD, please read the relevant web pages
first.