Ross Anderson's Home Page
Rendezvous is a
prototype search engine for code, which we hope will help bring software
reverse engineering into the 21st century (blog).
Security
Economics – A Personal Perspective is an invited paper I gave at ACSAC 2012 telling the story of how security economics
got going as a subject. This is often credited to a paper I gave at ACSAC 2001
but the real story is more complex. Another security-economics paper, Measuring
the Cost of Cybercrime, sets out to debunk the scaremongering around online
crime that governments and defence contractors are using to justify everything
from increased surveillance to preparations for cyberwar. It appeared at WEIS 2012 (press: BBC, WSJ, PC World, Computerworld).
By default, when I post a paper here I license it under the relevant Creative Commons
license, so you may redistribute it with attribution but not modify it. I
may subsequently assign the residual copyright to an academic publisher.
As systems scale globally, incentives start to matter as much as technology.
Systems break when the people who could fix them are not the people who suffer
the costs of failure. So it's not enough for security engineers to understand
cryptomathematics and the theory of operating systems; we have to understand
game theory and microeconomics too. This has led to a rapidly growing interest
in ‘security economics’, a discipline I helped to found. This
discipline is starting to embrace dependability and software economics; at the
other end, it's growing through behavioural economics into the psychology of
security. I maintain
the Economics and
Security Resource Page and a similar web page
on Security
Psychology. There is also a web page on
the economics
of privacy, maintained by Alessandro Acquisti. My research contributions
include the following.
Measuring
the Cost of Cybercrime sets out to debunk the scaremongering around online
crime that governments and defence contractors are using to justify everything
from increased surveillance to preparations for cyberwar. It was written in
response to a request from the UK Ministry of Defence, and appeared at WEIS 2012 (press: BBC, WSJ, PC World, Computerworld).
We've written a major
report for ENISA on the
Resilience of the Internet interconnection ecosystem which has been
adopted as ENISA policy. Here is the full
report (238 pages) and, for the busy, the 31-page executive
summary. We believe this is the first time anyone has documented how the
Internet actually works in practice, as opposed to in theory; we spent a lot of
time speaking to network operators about how they negotiate peering and
transit, what goes wrong, how they deal with failures and where the incentives
for resilience are inadequate.
The Economics of
Online Crime appeared in the Journal of Economic Perspectives; it looks at
the econometrics of fraud and phishing, and makes a number of suggestions for
improving the responses of banks and law-enforcement agencies.
The Impact
of Incentives on Notice and Take-down examines how take-down speed varies
with the incentive of the party requesting removal. Banks are quick to remove
phishing websites that mention them by name, but they ignore mule recruitment
websites because it's harder to tell which bank will be affected.
We have two further papers on security economics in banking. The first is
on Verified by
VISA – the mechanism that asks for your card password when you shop
online. This is an example of how a poor design can win out if it has strong
deployment incentives (see
also blog
post and slides).
The second, On
the Security of Internet Banking in South Korea, analyses the effects of
Korea's decision to use national cryptography standards for Internet banking
rather than just using the same protocols as the rest of the world.
The Trust Economy
of Brief Encounters argues that as transactions become more transient, we
will have to authenticate more; it appeared at the protocols workshop in 2009.
We did a major study of Security Economics in the Single Market for the European Network
and Information Security Agency. We looked at the market failures underlying
spam, phishing and other online problems, and made concrete policy proposals,
some of which have been adopted. A shorter
version (62 pages) appeared at WEIS 2008 (slides)
and an even
shorter version (25 pages), at ISSE.
Closing the
Phishing Hole – Fraud, Risk and Nonbanks reports research on payment
regulation commissioned by the US Federal Reserve. This paper identified speedy
asset recovery as the best way to deter online fraud and rapid, irrevocable
payment instruments (such as Western Union) as a systemic threat.
Why Information
Security is Hard – An Economic Perspective was the paper that got
information security people thinking about economics. It applies microeconomic
analysis to explain many phenomena that security folks had found to be
pervasive but perplexing.
On Dealing with
Adversaries Fairly applies election theory (also known as social choice
theory) to the problem of shared control in distributed systems.
The Economics
of Censorship Resistance examines when it is better for defenders to
aggregate or disperse. Should file-sharers build one huge system like gnutella
and hope for safety in numbers, or should everyone just share the stuff they
care about? More generally, what are the tradeoffs between diversity and
solidarity when conflict threatens? (This is a live topic in social policy
–
see David
Goodhart's essay, a response in
the Economist,
and a post
by Clay
Shirky.) This paper appeared
at WEIS 2004.
There are two annual workshops I helped establish. On the psychology side, the
Security and Human Behaviour workshop is great fun and hugely productive. See
the papers and the liveblog
for 2012; and
the links to the workshops for 2008-11. On the economic side,
the Workshop on Economics and
Information Security is now into its twelfth year and attracts over a
hundred participants.
Since about 2000, there has been an explosion of interest in peer-to-peer
and ad-hoc networking. One of the seminal papers was The Eternity
Service, which I presented at Pragocrypt 96. I had been alarmed by the
Scientologists' success at closing down the penet remailer
in Finland, and have more than once been threatened
by lawyers who did not want me to comment on the security of their clients'
systems. Yet the modern era only started once the printing press enabled
seditious thoughts to be spread too quickly and widely to ban. But when books no
longer exist as tens of thousands of paper copies, but as a file on a single
server, will government ministers and judges be able to unpublish them once
more? (This has
since happened
to newspaper archives in Britain.) So I invented the Eternity Service as a
means of putting electronic documents beyond the censor's grasp. The Eternity Service
inspired second-generation censorship-resistant systems such as Publius and Freenet; one descendant of these
early systems is wikileaks. Our main
contribution nowadays lies in helping to maintain Tor, the anonymity service used by
wikileaks and by many others.
But the biggest deal turned out to be not sedition, or even pornography, but
copyright. Hollywood's action against Napster led to our ideas being
adopted in filesharing systems. Many of these developments were described here,
and discussed at conferences like this one. See also
Richard Stallman's classic, The Right to Read.
Many of the ideas in early peer-to-peer systems reemerged in the study of ad-hoc
and sensor networks and are now spilling over into social networking systems.
My contributions since the Eternity paper include the following.
An
Experimental Evaluation of Robustness of Networks studies the best attack
and defence strategies in different kinds of networks. It builds on an earlier
paper, The
topology of covert conflict which asked how the police can best target an
underground organisation given some knowledge of its patterns of communication,
and how they in turn might react to various law-enforcement strategies.
Our framework combines ideas from network analysis and evolutionary game
theory to explore the interaction of attack and defence strategies in
networks.
I was a designer of the security of HomePlug AV, an industry standard for
broadband communication over the power mains. Here's
a paper
on what we did and why; it's a good worked example of how to do key
establishment in a real ad-hoc system. The core problem is: how can you be sure
you're recruiting the right device, rather than a similar one nearby?
Sybil-resistant DHT
routing appeared at ESORICS 2005 and showed how we can
make peer-to-peer systems more robust against disruptive attacks if we know
which nodes introduced which other nodes.
Key
Infection - Smart trust for Smart Dust appeared at ICNP 2004 and presents a radically new
approach to key management in sensor and peer-to-peer networks. Peers establish
keys opportunistically and use resilience mechanisms against later node
compromise. This work challenged the assumption that authentication is largely
about bootstrapping.
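The core trick is easy to render in code. Here is a minimal sketch of the two phases – plaintext keying at deployment, then combining secrets relayed over several disjoint paths so that an eavesdropper must have captured every one of them – with all names and sizes invented for illustration rather than taken from the paper:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Phase 1: at deployment, a node simply sends its neighbour a key in the
# clear. Only an attacker already listening near that link, at that very
# moment, learns it.
initial_key = os.urandom(16)

# Phase 2: later, the pair reinforce the key with secrets relayed along
# several disjoint paths of other nodes. XOR-combining them means an
# eavesdropper must have captured *every* path to recover the final key.
path_secrets = [os.urandom(16) for _ in range(3)]

key = initial_key
for secret in path_secrets:
    key = xor(key, secret)
```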
The Economics of
Censorship Resistance examines when it is better for defenders to aggregate
or disperse. Should file-sharers build one huge system like gnutella and hope
for safety in numbers, or would a loose federation of fan clubs for different
bands work better?
The Cocaine
Auction Protocol explored how transactions can be conducted between
mutually mistrustful principals with no trusted arbitrator, while giving a high
degree of privacy against traffic analysis.
The XenoService
– A Distributed Defeat for Distributed Denial of Service described
defeating DDoS attacks using a network of web hosts that can respond to an
attack on a site by replicating it rapidly and widely. It used Xen, a
hypervisor developed at Cambridge for distributed hosting, which led to
another startup.
I ran a CMI project with Frans Kaashoek and Robert Morris on building a
next-generation peer-to-peer system. I gave a keynote talk about this at the
2004 Wizards of OS
conference in Berlin; the slides are here.
I have been interested for many years in how security systems fail in real
life. This is a prerequisite for building robust secure systems; many security
designs are poor because they are based on unrealistic threat models. This work
began with a study of automatic teller machine fraud, and expanded to other
applications as well. It provides the central theme of my book. I also have a
separate page on bank
security which gathers together all our papers on fraud in payment systems
with some additional material.
Who
controls the off switch? describes the strategic vulnerability created by
the UK plan to replace 47m gas and electricity meters with ‘smart
meters’ that can be switched off remotely.
On a New Way to
Read Data from Memory describes techniques we developed that use lasers to
read out memory contents directly from a chip, without using the read-out
circuits provided by the vendor. The work builds on methods described in Optical Fault
Induction Attacks, which showed how laser pulses could be used to induce
faults in smartcards that would leak secret information. That paper appeared at
CHES 2002; it made the
front page of the New York
Times and also got covered by slashdot.
It led to the field of semi-invasive attacks on semiconductors, pioneered by my
then research student Sergei Skorobogatov.
On the Security
of Digital Tachographs successfully predicted how the introduction of
smartcard-based digital tachographs throughout Europe from 2005 would affect
fraud and tampering.
How to Cheat
at the Lottery reports a novel and, I hope, entertaining experiment in
software requirements engineering.
The Grenade
Timer describes a novel way to protect low-cost processors against
denial-of-service attacks, by limiting the number of cycles an application can
consume.
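To make the mechanism concrete, here is a toy software model of the idea – the real design is a hardware counter, and the names here are mine – showing a budget that can be armed once and only ever counts down:

```python
class GrenadeTimer:
    """Toy model: a cycle budget that can be set once and never extended."""

    def __init__(self, budget: int):
        self._remaining = budget      # armed once, when the task starts

    def tick(self, cycles: int = 1) -> None:
        # Counting down is the only operation; there is no reload method,
        # so even a hostile application cannot run beyond its budget.
        self._remaining -= cycles
        if self._remaining <= 0:
            raise RuntimeError("cycle budget exhausted: task terminated")

budget = GrenadeTimer(1000)
for _ in range(999):
    budget.tick()                     # within budget: fine
try:
    budget.tick()                     # the 1000th cycle trips the grenade
except RuntimeError as stopped:
    print(stopped)
```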
The Millennium
Bug – Reasons Not to Panic describes our experience in coping with
the bug at Cambridge University and elsewhere. This paper correctly predicted
that the bug wouldn't bite very hard. Journalists were not interested, despite
a major press
release by the University: I later discussed what we could learn from the
incident in a radio
interview with Stephen Fry.
The Memorability
and Security of Passwords – Some Empirical Results tackles an old problem
– how do you train users to choose passwords that are easy to remember but hard
to guess? We did a randomized controlled trial with a few hundred first-year
science students which confirmed some folk beliefs, but debunked some others.
This became one of the classic papers on security usability.
Murphy's law,
the fitness of evolving species, and the limits of software reliability
applies the techniques of statistical thermodynamics to the failure modes of
any complex system that evolves under testing. It provides a common mathematical
model for the reliability growth of complex computer systems and for biological
evolution. Its findings are in close agreement with empirical data, and it
inspired later
work in security economics.
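For readers who just want the headline scaling, my shorthand for the result (with $k$ a system-dependent constant and $t$ the time the system has been under test or use) is:

```latex
\lambda(t) \;\approx\; \frac{k}{t},
\qquad\text{so}\qquad
\mathrm{MTBF}(t) \;\approx\; \frac{t}{k}.
```

That is, the residual failure rate falls off only inversely with testing time, so the mean time between failures grows merely linearly – which is why testing alone cannot deliver ultra-high dependability.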
Security
Policies play a central role in secure systems engineering. They provide a
concise statement of the kind of protection a system is supposed to achieve.
This article is a security policy tutorial.
Combining
cryptography with biometrics shows that in those applications where you can
benefit from biometrics, you often don't need a large central database (as
proposed for Britain's ID card). There are
smarter and less privacy-invasive ways to arrange things.
Many security system failures are due to poorly designed protocols, and this
has been a Cambridge interest for many years. Some relevant papers follow.
Can We Fix the
Security Economics of Federated Authentication? explores how protocols work,
or fail, at global scale. How can we deal with a world in which your mobile
phone contains your credit cards, your driving license and even your car key
– and in particular what happens when it gets stolen or infected? (blog)
What Next
after Anonymity? argues that it isn't enough to worry about the
confidentiality of metadata (anonymity); we sometimes need to protect their
integrity as well.
API Level
Attacks on Embedded Systems are a powerful way to attack cryptographic
processors, and indeed any systems where more trusted processes talk to less
trusted ones. We found that a "secure" device can often be defeated by sending
it some sequence of transactions which its designer did not expect. We've
defeated pretty well every security processor we've looked at, at least once.
This line of research started at Protocols 2000 with The Correctness of
Crypto Transaction Sets; more followed in the first edition of
my book. Robbing
the bank with a theorem prover shows how to apply advanced tools to the
problem, and ideas for future research can be found in Protocol
Analysis, Composability and Computation. For a snapshot of how this
interacts with physical security, see
our survey of
cryptographic processors, a shortened version of which appeared in the
February 2006 Proceedings of the IEEE. An up-to-date survey of API attacks can
be found in the second edition of
my
book.
Programming
Satan's Computer is a phrase Roger Needham and I coined to express
the difficulty of designing cryptographic protocols; it has recently been
popularised by Bruce Schneier (see, for example, his foreword to my book). The problem of
designing programs which run robustly on a network containing a malicious
adversary is rather like trying to program a computer which gives subtly wrong
answers at the worst possible moment.
Robustness
principles for public key protocols gives a number of attacks on protocols
based on public key primitives. It also puts forward some principles which can
help us to design robust protocols, and to find attacks on other people's
designs. It appeared at Crypto 95.
The Cocaine
Auction Protocol explores how transactions can be conducted between
mutually mistrustful principals with no trusted arbitrator, even in
environments where anonymous communications make most of the principals
untraceable.
NetCard - A
Practical Electronic Cash Scheme presents research on micropayment
protocols for use in electronic commerce. We invented tick payments
simultaneously with Torben Pedersen and with Ron Rivest and Adi Shamir; we all
presented our work at Protocols 96.
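Tick payments are easiest to see as a hash chain. Here is a minimal sketch of the general technique (not NetCard's actual wire format): the payer commits to the top of a chain, then releases one preimage per tick, each of which the merchant verifies with a single cheap hash.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Payer: build the chain x, h(x), h(h(x)), ... and commit to its top value
# (in a real scheme, inside a signed message to the merchant).
n = 100
chain = [h(b"payer's secret seed")]
for _ in range(n):
    chain.append(h(chain[-1]))
commitment = chain[-1]

# Each tick of payment releases the next preimage down the chain; the
# merchant verifies it against the last value seen with one hash.
last_seen = commitment
for i in range(1, 4):              # pay three ticks
    coin = chain[n - i]
    assert h(coin) == last_seen    # cheap to verify, infeasible to forge
    last_seen = coin
```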
The GCHQ
Protocol and its Problems pointed out a number of flaws in a key management
protocol that GCHQ promoted as a European alternative to Clipper; we shot
it down with this paper at Eurocrypt 97. Many of the criticisms we developed
here also apply to the more recent, pairing-based cryptosystems.
The Formal
Verification of a Payment System describes the first use of formal methods
to verify an actual payment protocol, which was (and still is) used in an
electronic purse product (VISA's COPAC card). This is a teaching example I use
to get the ideas of the BAN logic across to undergraduates. There is further
detailed information in a technical
report, which combines papers given at ESORICS 92 and Cardis 94.
On Fortifying
Key Negotiation Schemes with Poorly Chosen Passwords presents a simple way
of achieving the same result as protocols such as EKE, namely preventing
middleperson attacks on Diffie-Hellman key exchange between two people whose
shared secret could be guessed by the enemy.
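To give the flavour, here is a toy sketch of the general approach – run plain Diffie-Hellman, then have both parties compare a deliberately short check value binding the fresh key to the weak password – rather than the exact scheme in the paper; the group parameters are toys too.

```python
import hashlib
import secrets

# Toy parameters: the Curve25519 field prime pressed into service as a DH
# modulus purely for illustration; use a standardised group in real life.
p = 2**255 - 19
g = 2

password = b"weak shared password"

# Plain, unauthenticated Diffie-Hellman.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A, B = pow(g, a, p), pow(g, b, p)
k_alice, k_bob = pow(B, a, p), pow(A, b, p)
assert k_alice == k_bob

# Confirmation: both sides derive a *short* check value binding the fresh
# key to the password, and compare it out of band. The truncation is the
# point: a middleperson who substituted keys learns so little per run that
# password guesses can only be verified on-line, attempt by attempt.
def check(key: int, pw: bytes) -> str:
    return hashlib.sha256(key.to_bytes(32, "big") + pw).hexdigest()[:4]

assert check(k_alice, password) == check(k_bob, password)
```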
Protocols have been the stuff of high drama. Citibank asked the High Court to
gag the
disclosure of certain crypto API
vulnerabilities that affect a number of systems used in banking. I wrote to
the judge opposing
this; a gagging
order was still imposed, although in slightly less severe terms than
Citibank had requested. The trial was in camera, the banks' witnesses didn't
have to answer questions about vulnerabilities, and new information revealed
about these vulnerabilities in the course of the trial may not be disclosed in
England or Wales. Information already in the public
domain was unaffected. The vulnerabilities were discovered by Mike Bond and me while acting as the
defence experts in a phantom withdrawal court case, and independently discovered
by the other side's expert, Jolyon Clulow, who later joined us as
a research student. They are of significant scientific interest, as well as being
relevant to the rights of the growing number of people who suffer phantom withdrawals from their bank
accounts worldwide. Undermining the fairness of trials and forbidding discussion
of vulnerabilities isn't the way forward (press coverage by the
Register and
news.com).
Reports of
an attack on the hash function SHA have made Tiger, which Eli Biham and I designed in 1995,
a popular choice of cryptographic hash function. I also worked with Eli, and
with Lars Knudsen, to develop Serpent – a
candidate block cipher for the Advanced
Encryption Standard. Serpent won through to the final of the competition and
got the second largest number of votes. Another of my contributions was founding
the series of workshops on Fast Software
Encryption.
Other papers on cryptography and cryptanalysis include the following.
The Dancing Bear
– A New Way of Composing Ciphers presents a new way to combine crypto
primitives. Previously, to decrypt using (say) any three out of five keys, the
keys all had to be of the same type (such as RSA keys). With my new
construction, you can mix and match – RSA, AES, even one-time pad. The paper
appeared at the 2004 Protocols Workshop; an earlier version came out at the FSE 2004 rump session.
Two Remarks on
Public Key Cryptology is a note on two ideas I floated at talks I gave in
1997-98, concerning forward-secure signatures and compatible weak keys. The
first of these has inspired later research by others; the second gives a new
attack on public key encryption.
Two
Practical and Provably Secure Block Ciphers: BEAR and LION shows how to
construct a block cipher from a stream cipher and a hash function. We had
already known how to construct stream ciphers and hash functions from block
ciphers, and hash functions from stream ciphers; so this paper completed the
set of elementary reductions. It also led to the "Dancing Bear" above.
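Here is the LION construction rendered in a few lines of Python, with SHAKE-256 standing in for both the keyed stream cipher and the hash; treat it as a sketch of the structure rather than a vetted implementation.

```python
import hashlib
import os

LSIZE = 32                      # bytes: the hash-sized left half

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def stream(key: bytes, n: int) -> bytes:
    # Stand-in stream cipher: SHAKE-256 used as a keystream generator.
    return hashlib.shake_256(key).digest(n)

def hsh(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def lion_encrypt(block: bytes, k1: bytes, k2: bytes) -> bytes:
    L, R = block[:LSIZE], block[LSIZE:]
    R = xor(R, stream(xor(L, k1), len(R)))   # round 1: stream cipher
    L = xor(L, hsh(R))                       # round 2: hash
    R = xor(R, stream(xor(L, k2), len(R)))   # round 3: stream cipher
    return L + R

def lion_decrypt(block: bytes, k1: bytes, k2: bytes) -> bytes:
    # The three rounds undo themselves when applied in reverse order.
    L, R = block[:LSIZE], block[LSIZE:]
    R = xor(R, stream(xor(L, k2), len(R)))
    L = xor(L, hsh(R))
    R = xor(R, stream(xor(L, k1), len(R)))
    return L + R

k1, k2 = os.urandom(LSIZE), os.urandom(LSIZE)
plaintext = os.urandom(128)                  # any block larger than LSIZE
assert lion_decrypt(lion_encrypt(plaintext, k1, k2), k1, k2) == plaintext
```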
Tiger –
A Fast New Hash Function defines a new hash function, which we designed
following Hans Dobbertin's attack on MD4. Tiger runs extremely
fast on the new 64-bit processors such as DEC Alpha and IA64, while still
running reasonably quickly on existing hardware such as Intel 80486 and
Pentium (the above link is to the Tiger home page, maintained in Haifa by Eli
Biham; if the network is slow, see my UK mirrors of the Tiger paper, new and old reference
implementations (the change fixes a padding bug) and S-box generation
documents. There are also third-party crypto toolkits supporting Tiger,
such as that from Bouncy Castle).
Minding your
p's and q's points out a number of things that can go wrong with the choice
of modulus and generator in public key systems based on discrete log. It
elucidated some of the previously classified reasoning behind the design of the
US Digital Signature Algorithm, and appeared at Asiacrypt 96.
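The kind of parameter hygiene the paper argues for can be partly automated. A sketch of a couple of such checks (using sympy and a deliberately tiny toy group; this is not the paper's full analysis):

```python
from sympy import isprime

def check_dl_params(p: int, g: int) -> list:
    """A few sanity checks on discrete-log parameters; not exhaustive."""
    problems = []
    if not isprime(p):
        return ["modulus p is not prime"]
    q = (p - 1) // 2
    if not isprime(q):
        return ["p is not a safe prime, so small subgroups exist"]
    if g in (0, 1, p - 1):
        problems.append("degenerate generator")
    elif pow(g, q, p) != 1:
        problems.append("g generates the full group of order 2q, "
                        "so each exchanged key leaks one bit")
    return problems

print(check_dl_params(23, 2))   # [] -- 23 = 2*11 + 1, and 2 has order 11
print(check_dl_params(23, 5))   # flags the order-22 generator
```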
Chameleon
– A New Kind of Stream Cipher shows how to do traitor tracing using
symmetric rather than public-key cryptology. The idea is to turn a stream
cipher into one with reduced key diffusion, but without compromising
security. A single broadcast ciphertext is decrypted to slightly different
plaintexts by users with slightly different keys. This paper appeared
at Fast Software
Encryption in Haifa in January 1997.
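A toy demo of the fingerprinting principle (my own simplified construction for illustration; the published cipher's table sizes and index generator differ): each keystream word XORs a few entries of a large key table, so flipping a couple of bits in one user's copy of the table perturbs only a few plaintext bits, and those bits identify the copy.

```python
import os
import random

WORD, TABLE, PICKS = 4, 256, 4          # toy sizes, not the paper's

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(table, nbytes, seed=0):
    # A public index sequence picks table entries; each keystream word is
    # the XOR of a few entries, so the key diffuses weakly *by design*.
    idx, out = random.Random(seed), b""
    while len(out) < nbytes:
        w = bytes(WORD)
        for _ in range(PICKS):
            w = xor(w, idx.choice(table))
        out += w
    return out[:nbytes]

master = [os.urandom(WORD) for _ in range(TABLE)]

# Fingerprint one subscriber's key: flip a few bits in their table copy.
user = list(master)
for i in (3, 59, 171):
    user[i] = xor(user[i], b"\x01\x00\x00\x00")

plaintext = os.urandom(256)
ct = xor(plaintext, keystream(master, len(plaintext)))
pt_user = xor(ct, keystream(user, len(plaintext)))

# The subscriber sees the content with a handful of bits perturbed -- a
# watermark that traces a leaked copy back to that key.
diff = sum(bin(a ^ b).count("1") for a, b in zip(plaintext, pt_user))
print(f"user's plaintext differs in {diff} bits out of {len(plaintext) * 8}")
```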
Searching
for the Optimum Correlation Attack shows that nonlinear combining functions
used in nonlinear filter generators can react with shifted copies of themselves
in a way that opens up a new and powerful attack on many cipher systems. It
appeared at the second workshop on fast software encryption.
The Classification of
Hash Functions showed that correlation freedom is strictly stronger than
collision freedom, and that there are many pseudorandomness properties
other than collision freedom which hash functions may need. It appeared at
Cryptography and Coding 93.
A Faster Attack
on Certain Stream Ciphers shows how to break the multiplex shift register
generator, which is used in satellite TV systems. I found a simple
divide-and-conquer attack on this system in the mid-1980s, a discovery that
got me "hooked" on cryptology. This paper is a refinement of that work.
On Fibonacci
Keystream Generators appeared at FSE3, and shows how to break "FISH", a
stream cipher proposed by Siemens. It also proposes an improved cipher, "PIKE",
based on the same general mechanisms.
From the mid- to late-1990s, I did a lot of work on information hiding.
Soft Tempest:
Hidden Data Transmission Using Electromagnetic Emanations must be one of
the more unexpected things we discovered. It is well known that eavesdroppers
can reconstruct digital information such as video screen content from stray
radio frequency emanations; such `Tempest attacks' were traditionally prevented
by shielding, jammers or physical distance. We discovered that the software on
a computer can control its stray electromagnetic emanations. To attack a
system, malware can hide stolen information in signals that leak and optimise
them for some combination of reception range, receiver type or even covertness.
To defend a system, a screen driver can display sensitive information using
fonts which minimise emitted RF energy. This technology was fielded in PGP and
elsewhere. You can download Tempest fonts from here.
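The defensive half of the trick is just signal processing on glyphs. A minimal sketch (my toy glyph and filter kernel; the real Tempest fonts were produced with more careful filtering):

```python
import numpy as np

# A toy glyph: the hard black/white edges of a normal font are what
# radiate most strongly at radio frequencies.
glyph = np.zeros((8, 8))
glyph[1:7, 3:5] = 1.0                     # a crude letter "I"

# Low-pass filter each scan line. Softening the horizontal transitions
# strips the top of the spectrum, which is the part an eavesdropper's
# receiver reconstructs best; on screen the text merely looks slightly
# anti-aliased.
kernel = np.array([0.25, 0.5, 0.25])
filtered = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, glyph)
```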
A followup
paper on the costs and benefits of Soft Tempest in military environments
appeared at NATO's 1999 RTO meeting on infosec, while an earlier version of our
main paper, which
received considerable
publicity, is
available here.
Finally, there's some software you can use to play your MP3s over the
radio here, a press article
here
and information on more recent optical tempest attacks here.
Hollywood once hoped that copyright-marking systems would help control the
copying of videos, music and computer games. This became high drama when a paper that showed how to break
the DVD/SDMI copyright marking scheme was pulled by its authors from the Information
Hiding 2001 workshop, following legal threats from Hollywood. In fact, the
basic scheme – echo hiding – was among a number that we broke in
1997. The attack was reported in our paper Attacks on
Copyright Marking Systems, which we published at Info Hiding 1998. We
also wrote Information
Hiding – A Survey, which appeared in Proc IEEE and is a good place to
start if you're new to the field. For the policy aspects, you might read Pam Samuelson. There is much more
about the technology on the web page of my former student Fabien Petitcolas.
Another novel application of information hiding is the Steganographic File
System. It will give you any file whose name and password you know, but if
you do not know the correct password, you cannot even tell that a file of that
name exists in the system! This is much stronger than conventional multilevel
security, and its main function is to protect users against coercion. Two of
our students implemented SFS for Linux: a paper describing the details is here, while the code
is available here. This
functionality has since appeared in a number of crypto products.
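A toy model shows why the deniability falls out of the design (my own simplification, ignoring collisions, file lengths and the layered structure of the real system): both the location and the encryption pad of each block are derived from the name and password, and the rest of the disk is initialised with random blocks.

```python
import hashlib
import os

DISK_BLOCKS, BSIZE = 1024, 32
disk = [os.urandom(BSIZE) for _ in range(DISK_BLOCKS)]   # random-looking disk

def slot_and_pad(name: str, password: str, i: int):
    # Both the block's location and its encryption pad come from the
    # (name, password) pair, so without the password you cannot even
    # point at the blocks, let alone decrypt them.
    d = hashlib.sha256(f"{name}|{password}|{i}".encode()).digest()
    return int.from_bytes(d[:4], "big") % DISK_BLOCKS, hashlib.sha256(d).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def write(name, password, data: bytes):
    for i in range(0, len(data), BSIZE):
        block = data[i:i + BSIZE].ljust(BSIZE, b"\0")
        slot, pad = slot_and_pad(name, password, i // BSIZE)
        disk[slot] = xor(block, pad)

def read(name, password, nblocks: int) -> bytes:
    return b"".join(xor(disk[s], p) for s, p in
                    (slot_and_pad(name, password, i) for i in range(nblocks)))

write("diary", "correct horse", b"nothing to see here")
assert read("diary", "correct horse", 1).startswith(b"nothing to see here")
# With the wrong password you just get noise, indistinguishable from the
# untouched random blocks elsewhere on the disk.
```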
The threat in the 1990s by some governments to ban cryptography led to a
surge of interest in steganography – the art of hiding messages in other
messages – and then the surge of paranoia post-9/11 stoked interest in
looking for them, with nonsense
like this boosting many a bureaucrat's budget. Our paper On The Limits
of Steganography explored what can and can't be done (here's an earlier version).
The Newton
Channel settles a conjecture of Simmons by exhibiting a high bandwidth
subliminal channel in the ElGamal signature scheme.
There's a huge row
brewing over the new government's plans to centralise medical records; the
cover story is giving us access to our records online while the real agenda is
to give
access to drug company researchers. This follows a big row
under the last Government over the Summary Care Record, which centralises
records and makes them available to hundreds of thousands of NHS staff. Our
Government decided in 2002 to build a number of central medical databases, in a
£12bn project known as the National Programme for IT, or NPfIT. By
2006 this project was visibly failing, so I organised 23 computer science
professors to write to the Health Committee
requesting an independent review; the
government refused. In 2009,
a report
we wrote for the Joseph Rowntree Reform Trust showed that many current and
proposed NHS databases break European law;
the I
v Finland case ruled that European citizens have a right to restrict their
medical data to clinicians directly involved in their care. This ruling made
centralised medical records unlawful in the absence of an opt-out. Both the
Conservatives and the Lib Dems promised to axe NPfIT if they won the 2010
election; after they did so, the name was dropped but the stupidity continued.
The NHS has a
long history
of privacy abuses. The
previous prime minister's own medical records were compromised; the miscreant
got off scot-free
as it was not in the "public interest" to prosecute him. In another famous
case, Helen Wilkinson had to organise a debate
in Parliament to get ministers to agree to remove defamatory and untrue
information about her from NHS computers. The minister assured the House that
the libels had been removed; months later, they still had not been. Helen
started www.TheBigOptOut.org to
campaign for health privacy. In a typical recent case, a woman was tracked
down by her ex-husband and seriously injured after his aunt looked up her name
and address in NHS systems. Her case is currently before the courts.
Here are my most recent papers on the subject.
Database
State is a report we wrote for the Joseph Rowntree Reform Trust on
the failings of public-sector IT in Britain, and how to fix them. It pointed
out that a number of health systems almost certainly break European law. There's
coverage on the BBC, in the Guardian (also here), the Mail (also
here),
the Independent, the Telegraph, E-Health Insider and Liberty Central. This report had a lot of impact; the coalition
government promised to abolish or at least change a number of the systems we
fingered as unlawful. However, although we killed the ID card and got some
children's databases axed, health systems seem to have escaped reform.
I was one of the authors of a 2006 report on the safety and
privacy of children's databases, done for the UK Information Commissioner.
It concluded that government plans to link up most of the public-sector
databases that hold information on children were misguided: the proposed systems
would be both unsafe and illegal. This report got a lot of
publicity. I spoke on these issues on three
videos made by Action on Rights for Children.
I wrote a report
for the National Audit Office on the health IT expenditure, strategies and
goals of the UK and a number of other developed countries. This showed that the
NHS National Programme for IT is in many ways an outlier, and high-risk.
Here is an article I
wrote for Drugs and Alcohol Today analysing the likely effects of the NHS
computing project on patient privacy, particularly in the rehabilitation field.
Patient confidentiality
and central databases appeared in the February 2008 British Journal of
General Practice, calling on GPs to encourage patients to opt out of the NHS
care records service.
System security for
cyborgs discusses technical, ethical and security-economics issues to do
with implantable medical devices.
Civil servants started pushing for online access to everyone's records in 1992
and I got involved in 1995, when I started consulting for the British Medical
Association on the safety and privacy of clinical information systems. Back
then, the police were given access to all drug prescriptions, after the
government argued that they needed it to catch doctors who misprescribed
heroin. The police got their data, but they didn't
catch Harold Shipman,
and no-one was held accountable. The NHS slogan in 1995 was `a unified electronic patient record, accessible to
all in the NHS'. The BMA campaigned against this, arguing that it would destroy
patient privacy:
Security in Clinical Information Systems was published by the BMA in
January 1996. It sets out rules that uphold the principle of patient consent
independently of the details of specific systems. It was the medical
profession's initial response to the safety and privacy problems posed by
centralised NHS computer systems.
An
Update on the BMA Security Policy appeared in June 1996 and tells the story
of the struggle between the BMA and the government, including the origins and
development of the BMA security policy and guidelines.
Here are comments I made
at NISSC 98 on the healthcare protection profiles being developed by NIST for
the DHHS to use in regulating health information systems privacy; these
profiles made a number of mistaken assumptions about threats and protection
mechanisms.
Remarks
on the Caldicott Report raises a number of issues about the report of the
Caldicott Committee, which was set up by the Major government to kick the
medical privacy issue into touch until after the 1997 election. Its members
failed to understand that medical records from which the names have been
removed, but where NHS numbers remain, are not anonymous – as large
numbers of NHS staff need to map names to numbers in order to do their
jobs.
The
DeCODE Proposal for an Icelandic Health Database analyses a proposal to
collect all Icelanders' medical records into a single database. I evaluated
this for the Icelandic Medical Association and concluded that the proposed
security wouldn't work. The company running it soon hit financial
problems and later filed for bankruptcy. The ethical issues were a
factor: Iceland's
Supreme Court allowed a woman to block access to her father's
records because of the information they may reveal about her (see analysis).
This effectively killed the vision of having the whole population on a database.
I also wrote an analysis
of security targets prepared under the Common Criteria for the evaluation of
this database. See also BMJ
correspondence and an article by Einar
Arnason.
Clinical
System Security – Interim Guidelines appeared in the British Medical
Journal on 13th January 1996. It advises healthcare professionals on prudent
security measures for clinical data. The most common threat is that private
investigators use false-pretext telephone calls to elicit personal health
information from assistant staff.
A
Security Policy Model for Clinical Information Systems appeared at the 1996
IEEE Symposium on Security and Privacy. It presents the BMA policy model to the
computer security community in a format comparable to policies such as
Bell-LaPadula and Clark-Wilson. It had some influence on later US health
privacy legislation (the Kennedy-Kassebaum Bill, now HIPAA).
Problems
with the NHS Cryptography Strategy points out a number of errors in, and
ethically unacceptable consequences of, a report on
cryptography produced for the Department of Health. These comments formed the
BMA's response to that report.
In 1996, the Government set up the Caldicott Committee to study the
matter. Their report
made clear that the NHS was already breaking confidentiality law by sharing
data without consent; but the next Government
just legislated
(and regulated,
and again) to
give itself the power to share health data as the Secretary of State saw
fit. (We objected
and pointed out
the problems the
bill could cause; similar sentiments were expressed in
a BMJ editorial,
and a Nuffield
Trust impact
analysis, and BMJ
letters here
and here. Ministers
claimed the records were needed for cancer registries: yet cancer researchers
work with anonymised data in other countries – see
papers here
and here.)
There was a storm of protest in the press: see
the Observer,
the New Statesman,
and The
Register. But that died down; the measure has now been consolidated
as sections 251
and 252 of the NHS Act 2006, and the Thomas-Walport review blessed nonconsensual
access to health records (despite FIPR pointing out that this was
illegal – a view later supported by the European Court). A government
committee, the NHS Information Governance
Board, was set up to oversee this lawbreaking, and Dame Fiona is being wheeled out once more. Centralised,
nonconsensual health records not only contravene the I v Finland judgement but
the Declaration
of Helsinki on ethical principles for medical research and
the Council of
Europe recommendation no R(97)5 on the protection of medical data.
Two health IT papers by colleagues deserve special mention. Privacy in clinical
information systems in secondary care describes a hospital system
implementing something close to the BMA security policy (it is described in
more detail in a special issue of the Health
Informatics Journal, v 4 nos 3-4, Dec 1998, which I edited). Second, Protecting Doctors'
Identity in Drug Prescription Analysis describes a system designed to
de-identify prescription data for commercial use; although de-identification
usually does not protect patient privacy very well, there are exceptions, such
as here. This system led to a court case, in which the government tried to stop
its owner promoting it – as it would have competed with their (less
privacy-friendly) offerings. The government lost: the Court of Appeal decided
that personal health information can be used for research without patient
consent, so long as the de-identification is done competently.
I chair the Foundation for Information Policy
Research, the UK's leading Internet policy think tank, which I helped set
up in 1998. We are not a lobby group; our enemy is ignorance rather than the
government of the day, and our mission is to understand IT policy issues and
explain them to policy makers and the press. Here's an overview of the issues as we saw them in 1999, and a video
of how we saw them ten years later in 2008. Some highlights of our work follow.
Privacy has become a big theme recently thanks to the Communications
Data Bill, against which we have been organising resistance.
In the last parliament, our Database
State report on the failings of public-sector IT in Britain, and how to fix
them, got massive press coverage: the BBC, the Guardian (also here), the Mail (also
here),
the Independent, the Telegraph, E-Health Insider and Liberty Central. This followed an earlier report on children's databases,
and the many other activities described above. Both main
opposition parties promised to kill or change at least some of these systems,
and after they won power in the 2010 election their coalition agreement spelled
the end of the ContactPoint children's database, and of ID cards. The subsequent
review by my FIPR
colleague Eileen Munro also sealed the fate of eCAF, another central children's
database system.
Waste of Public Money is another objection to the bad
government systems that undermine our privacy. Other wasteful systems include smart
meters which look set to cost billions without achieving anything useful (blog).
Identity Cards were a clever political move by Blair; they
divided the Conservatives in 2004-5. I testified
to the Home Affairs committee in 2004 that they would not work as advertised,
and contributed to the LSE Report that
spelled this out in detail. I'd produced numerous previous pieces in response
to government identity consultations, on aspects such as smartcards and PKI. There's more
in my book (ch. 6).
Internet Censorship is a growing problem, and not just in
developing countries; I've been on the receiving end more than once. In 1995, I invented the first
censorship-resistant system, the Eternity
Service; this was a precursor of later file-sharing systems (see above), and we've also written on the economics of
censorship resistance. But despite the technical difficulties and
collateral costs of content filtering, governments aren't giving up. From 2006
to 2008, I was a principal investigator for the OpenNet Initiative which monitors
Internet filtering worldwide. Shifting
Borders reviewed the state of play in late 2007, and appeared in Index
on Censorship; Tools
and Technology of Internet Filtering goes into more technical detail and
appeared in Access Denied. The
political action now is about Internet blocking.
Consumer Protection: FIPR also brought together legal and computing
experts to deconstruct the fashionable late-1990s notion that ‘digital
certificates’ would solve all the problems of e-commerce and e-government.
Anyone inclined to believe such nonsense should read Electronic
Commerce – Who Carries the Risk of Fraud?. Other work in this thread
includes FIPR's responses to consultations on smartcards, the electronic signature
directive and the ecommerce
bill.
More recently we have been alarmed at the erosion of consumer rights as a
result of the introduction of chip and PIN cards. The technical sections above describe how frauds happen; the flip side of the story is how
the banks escape liability. Our analysis of the failings of the
Financial Ombudsman Service remains unanswered; see also FIPR's submission
on Personal Internet
Security (with which the House of Lords basically agreed)
and the National Payments Plan. FIPR
now takes the view that the only way to fix
consumer protection is to replace public action with private action, by changing
the rules on costs so that consumers can enforce their rights in court without
risking horrendous costs orders if they lose.
The RIP Act, the Crypto Wars and Key Escrow: I got engaged
in technology policy thanks to attempts in the 1990s by governments (led by the
USA) to control the use of cryptography. In 1995, I wrote Crypto in Europe
– Markets, Law and Policy. This surveyed the uses of cryptography in
Europe and discussed the shortcomings of crypto policy; I pointed out that law
enforcement communications intelligence was mostly about traffic analysis and
criminal communications security was mostly traffic security. This view was
heretical at the time but is now orthodoxy. The Risks of Key Recovery,
Key Escrow, and Trusted Third-Party Encryption became the most widely-cited
publication on key escrow. It examines the technical risks, costs, and
implications of deploying systems that would satisfy government wishes. It was
originally presented as testimony to the US Senate, and then also to the Trade
and Industry Committee of the UK House of Commons, together with a further
piece I wrote, The
Risks and Costs of UK Escrow Policy.
The GCHQ
Protocol and its Problems pointed out a number of serious defects in the protocol
that the British government used to secure its electronic mail, and which it
wanted everyone else to use too. This paper appeared at Eurocrypt 97 and it
replies to GCHQ's response to an earlier
version of our paper. Our analysis stopped the protocol being widely
adopted. The
Global Trust Register is a book of the fingerprints of the world's most
important public keys. It thus implements a top-level certification authority,
but using paper and ink rather than electronics. DTI proposals for mandatory
licensing of cryptographic services would have banned this book; that fact
enabled me to visit Culture Secretary Chris Smith at a critical point in the
crypto wars and get crypto policy referred to Cabinet when otherwise it would
have remained the province of the civil servants.
This was all part of a campaign that FIPR ran to limit the scope of the Regulation of Investigatory Powers Act.
Originally this would have allowed the police to obtain, without warrant, a
complete history of everyone's web browsing activity (as this would have been
‘communications data’); FIPR got the House of Lords to limit this to
the identity of the machines involved in a communication, rather than the URLs
of the web pages. But the RIP Act still made it into law and has had a number of
the bad effects we predicted at the time. See for example
an op-ed
I wrote on the history of the Act following
the unfortunate
imprisonment of a mentally-ill man under the Act for refusing to hand over
his PGP passphrase when the Met's terror squad told him to.
These issues have come round once more with GCHQ's Interception
Modernisation Programme, a plan to centralise all traffic data first in a
central database and more recently in a system of federated databases maintained
by communications service providers. FIPR wrote a response to a Home Office
consultation on this, and another response to one on the orders and
codes of practice for interception. We also commented on the Cabinet
Office's (actually GCHQ's) inappropriate proposals to secure government systems.
Terrorism: A page with Comments on Terrorism
explains why many of the measures that various people have been trying to sell
since the 11th September attacks are unlikely to work as promised. Much
subsequent policy work has been made harder by assorted salesmen, centralisers,
rent-seekers and chancers talking about terror; I testified against police
attempts to increase pre-charge detention to ninety days with the implausible
claim that they needed more time to decrypt seized data. We must constantly push
back on the scaremongers; here for example is a video I did on the effects
of 9/11.
Export Control: In 2001-02, FIPR persuaded the Lords to
amend the Export
Control Bill. This bill was designed to give ministers the power to license
intangible exports. It was the result of US lobbying of Tony Blair in 1997;
back then, UK crypto researchers could put source code on our web pages while
our US colleagues weren't allowed to. In its original
form, its provisions were so broad that it would have given ministers the
power of pre-publication review of scientific papers. We defeated the Government
in the House of Lords by 150-108, following a hard campaign – see press
coverage in the BBC,
the New
Scientist, the Guardian
and the Economist, and an article on free
speech I wrote for IEEE Computing. But the best quote I
have is also the earliest. The first book written on cryptology in English, by
Bishop John Wilkins in 1641, remarked that ‘If all those
useful Inventions that are liable to abuse, should therefore be concealed,
there is not any Art or Science which might be lawfully profest’.
This issue revived in 2003, with a government attempt to wrest back using
regulations much of what they conceded in parliament. FIPR fought back
and extracted assurances
from Lord Sainsbury about the interpretation of regulations made under the
Act. Without our campaign, much scientific collaboration would have become
technically illegal, leaving scientists open to arbitrary harassment. Much
credit goes to the Conservative frontbencher Doreen
Miller, Liberal Democrat frontbencher Margaret
Sharp, and the then President of the Royal Society Bob May,
who made his maiden speech in the Lords on the issue and marshalled the
crossbenchers. We are very grateful for their efforts.
Trusted Computing was a focus in 2002-03. I wrote a Trusted Computing FAQ
that was very widely read, followed by a study of the
competition policy aspects of this technology. This led inter alia to a symposium organised by the German government which
in turn pushed the Trusted Computing Group into incorporating, admitting small
companies, and issuing implementation guidelines. Trusted Computing appears to
have fizzled out because Microsoft couldn't get remote attestation to work; the
only thing the TPM is used for in Windows Vista is hard disk encryption.
IP Enforcement: Our top priority in 2003-04 was the EU IPR enforcement
directive, which has been succinctly described as DMCA on
steroids and criticised by
distinguished lawyers. Our lobbying got it amended to remove
criminal sanctions for patent infringement and legal protection for devices such
as RFID tags. This law was supported by the music industry,
the luxury brands, and (initially) Microsoft, while the coalition that we
put together to oppose it included the phone companies, the supermarkets, the
generic drugmakers, the car parts industry, smaller software firms and the free
software community. The press was sceptical – in Britain, France and even America. The issue was even linked to a boycott of
Gillette. There is more on my blog.
This was a watershed in copyright history: the IP lobby was never going to
be stopped by fine words, only by another lobby pushing in the other direction,
and the Enforcement Directive was when that first came together. It also led to
the birth of EDRI, European Digital Rights, a
confederation of European digital-rights NGOs, whose establishment was one of
FIPR's significant achievements. EDRI's first campaign was against the IP
Enforcement Directive; afterwards FIPR and EDRI established a common position on intellectual
property. Since then I have given evidence to the Gowers Review of IP
and a parliamentary
committee on DRM; however the lead UK NGO on IP nowadays is the Open Rights Group.
My pro-bono work has included sitting on Council, our University's governing
body. I stood for election in 2002 because of incidents like this; to stop
such things happening again, we founded the Campaign for Cambridge
Freedoms, and campaigned against a proposal that most of the intellectual
property generated by faculty members – from patents on bright ideas to books
written up from lecture notes – would belong to the university rather than to
its creator. The final vote
approved a policy according to which academics keep copyright but the University
gets 15% of patent royalties. I got re-elected in 2006, and
in my second term we
won an important vote to protect
academic freedom. For more, see my article from
the Oxford Magazine, and my Unauthorised
History of Cambridge University.
Finally, here is my PGP
key. If I revoke this key, I will always be willing to explain why I have
done so provided that the giving of such an explanation is lawful. (For
more, see FIPR.)
Security engineering is about building systems to remain dependable in the face
of malice, error or mischance. As a discipline, it focuses on the tools,
processes and methods needed to design, implement and test complete systems,
and to adapt existing systems as their environment evolves. My book has become
the standard textbook and reference since it was published in 2001. You can
download the first edition without charge here.
Security engineering is not just concerned with infrastructure matters such as
firewalls and PKI. It's also about specific applications, such as banking and
medical record-keeping, and about embedded systems such as automatic teller
machines and burglar alarms. It's usually done badly: it often takes several
attempts to get a design right. It is also hard to learn: although there were
good books on a number of the component technologies, such as cryptography and
operating systems, there was little about how to use them effectively, and even
less about how to make them work together. Most systems don't fail because the
mechanisms are weak, but because they're used wrong.
My book was an attempt to help the working engineer to do better. As well as
the basic science, it contains details of many applications – and a lot of
case histories of how their protection failed. It describes a number of
technologies which aren't well covered elsewhere. The first edition was pivotal
in founding the now-flourishing field
of information security
economics: I realised that the narrative had to do with incentives and
organisation at least as often as with the technology. The second edition
incorporates the economic perspectives we've developed since then, and new
perspectives from the psychology of security, as well as updating the
technological side of things.
I only referee for open publications, so I discard emails asking for reports
for journals that sit behind a paywall. I also usually discard emails sent by
people's secretaries: if you can't be bothered to email me yourself, then I
can't be bothered to answer myself either.
If you want to do a PhD, please read our web pages first; we get lots
of CV spam which we delete. I also discard emails that ask for internships; we
can't employ overseas students on Tier 4 visas any more. If you're interested
in coming as an undergrad, have a look at our video.