I'm a postdoctoral Research Associate at the Computer Laboratory at the University of Cambridge. I'm working within the REMS project. Within the Laboratory, I participate in both the Programming, Logic and Semantics group and the Systems Research Group (particularly its Networks and Operating Systems subgroup).
Contact information is at the bottom of this page.
I am a practical computer scientist with wide interests. My goal is to make it easier and cheaper to develop useful, high-quality software systems. So far, my work has mostly focused on programming languages and the systems that support them—including language runtimes and operating systems.
Most of my research concerns programming, but I identify as a “systems” researcher. For me, “systems” is a mindset rather than a research topic. It means I am primarily interested in the practical consequences of an idea; its abstract properties are of interest only subserviently. (If you can come up with a better definition, let me know.) I generally hesitate to identify as a “programming languages” researcher, not because I dislike languages but because they're only one piece of the puzzle. I'd identify as a “programming systems” researcher if that were a thing.
Since my PhD, I've spent various spells working for Oracle Labs, the University of Lugano, and the University of Oxford. See my biography section for more.
You can read more below about what I'm working on. A recurring theme is that I prefer pushing innovations into the software stack at relatively low levels, to the extent that this is sensible (which I believe can be surprising). Another is the pursuit of infrastructure that is well-integrated, observable, simple, flexible, and designed for human beings—particularly if we believe (as I do) that computers are for augmenting the human intellect, not just for high-performance data crunching.
I did my Bachelor's degree in computer science in Cambridge, graduating in 2005. I then stayed on for a year as a Research Assistant, before starting my PhD in 2006.
My PhD work was based in the Networks and Operating Systems group under the supervision of Dr David Greaves, and centred on the problem of building software by composition of ill-matched components, for which I developed the Cake language.
During summer 2007 I took an internship at Fraser Research, doing networking research.
From January 2011 until March 2012 I was a research assistant in the Department of Computer Science at the University of Oxford. There I worked as a James Martin Fellow, within the research programme of the Oxford Martin School's Institute for the Future of Computing. My work mostly focused on constructing a continuum between state-space methods of program analysis (notably symbolic execution) and syntactic methods (such as type checking). (This work was rudely interrupted by fate, but will be revived eventually.)
From May 2012 to May 2013, I was a postdoctoral researcher at USI's Faculty of Informatics in Lugano, Switzerland, within the FAN project.
From May until October 2013 I was temporarily a Research Assistant at Oracle Labs, on the Alphabet Soup project.
In my industrial life, I enjoyed some spells working for Opal Telecom and ARM around my Bachelor's studies. More recently, I have been consulting for Ellexus, and conducted some industrial research work (in reality rather development-heavy) for Oracle Labs.
You can find a version of my CV here, though be aware that it may be a little rough or incomplete at times. If you're recruiting for jobs in the financial sector, even technical ones, please don't bother reading my CV or contacting me—I promise I am not interested.
Here's what I'm working on as of April 2016. In most cases these are my own individual projects; a few are with collaborators, as I describe.
Unix-like processes are with us to stay, but are impoverished in ways that hold us back. It is too difficult to build robust tools and systems which automate or abstract meta-level programming tasks, such as debugging, profiling, visualisation, instrumentation, composition, error reproduction and dynamic update. In turn, this massively hampers software development generally! In large part, the root problem is that the state of a Unix process is too opaque. Unix lacks adequate meta-level facilities for describing user-supplied abstractions such as data representation, memory layouts, procedural interfaces, file formats, inter-process protocols, and so on. To fix this, rather than designing a whole new system with strong meta-level abstractions, as has been done in Smalltalk and Lisp programming systems, my work chooses instead to evolve the services offered by existing Unix-like systems. Since meta-level tasks are far from uncommon, vestigial features to help carry them out can often be found “lurking” in modern Unix. By extending and evolving these features, my work can (only half-jokingly) be said to be “dragging Unix into the 1980s”. It does so while maximising compatibility, allowing new benefits to be applied to old code, in whatever language—the work spans C and C++ right up to JavaScript, Python and OCaml. One output of this work is liballocs, a run-time library for fast whole-process tracking of per-allocation metadata, including detailed type information. Another is libdlbind, which helps make JIT-compiled code visible to debuggers and dynamic loaders. Both of these are building blocks used by my other research efforts (below). Motto: if you must unify, unify at the meta-level of the operating system, not at the base language level or in language implementations.
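To give a flavour of the kind of meta-level query this enables, here is a toy sketch in C. It is emphatically not liballocs's real API: the names, the registry and the lookup are invented for illustration, and liballocs itself tracks every allocation in the process far more efficiently. The point is simply that any pointer, even an interior one, can be asked “what are you?” at run time.

```c
/* Toy sketch of the idea (not liballocs's real API): record simple
 * per-allocation metadata so that any pointer, even an interior one,
 * can later be asked "what are you?" at run time. */
#include <stdio.h>
#include <stdlib.h>

struct alloc_record { void *base; size_t size; const char *type; };
static struct alloc_record records[1024];
static size_t nrecords;

/* Allocate and remember what the allocation "is". */
static void *alloc_typed(size_t size, const char *type)
{
    void *p = malloc(size);
    if (p && nrecords < 1024)
        records[nrecords++] = (struct alloc_record){ p, size, type };
    return p;
}

/* Meta-level query: find the record covering a (possibly interior) pointer. */
static const struct alloc_record *describe(void *ptr)
{
    for (size_t i = 0; i < nrecords; ++i)
        if ((char *) ptr >= (char *) records[i].base
                && (char *) ptr < (char *) records[i].base + records[i].size)
            return &records[i];
    return NULL;
}

struct point { double x, y; };

int main(void)
{
    struct point *p = alloc_typed(sizeof *p, "point");
    const struct alloc_record *r = describe(&p->y);   /* interior pointer */
    if (r) printf("pointer into a %zu-byte '%s' object based at %p\n",
                  r->size, r->type, r->base);
    free(p);
    return 0;
}
```

Debuggers, checkers and profilers can all be built on this kind of query, which is exactly the facility that ordinary Unix processes lack.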
Huge amounts of code have been written, and continue to be written, in unsafe programming languages including C, C++, Objective-C, Fortran and Ada. Performance is one motivator for choosing these languages; continuity with existing code, particularly libraries, is another. The experience of compiling and debugging using these languages is a patchy one, and often creates large time-sinks for the programmer in the form of debugging mysterious crashes or working around tools' limitations. I have developed the libcrunch system for run-time type checking and robust bounds checking. This consists of a runtime library based on liballocs, together with compiler wrappers and other toolchain extensions for instrumenting code and collecting metadata. Slowdowns are typically in the range 0–35% for type checking and 15–100% for bounds checking. The system is also unusually good at supporting use of uninstrumented libraries, continuing after error, and supporting multiple languages. Its design is a nice illustration of an approach I call toolchain extension: realising new facilities using a combination of source- and binary-level techniques, both pre- and post-compilation, together with the relevant run-time extensions. Critically, it re-uses the stock toolchain to the greatest possible extent, and takes care to preserve binary compatibility with code compiled without use of these extensions. The mainline implementation uses the CIL library; during 2014–15, Chris Diamand developed a Clang front-end, with help from David Chisnall. Motto: unsafe languages allow mostly-safe implementations; combine compile- and run-time techniques to make this the pragmatic option.
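For illustration, here is the kind of C-level type error such a checker is designed to flag at run time (the struct names are just an example). The code compiles without warnings, and a conventional build gives no hint that the cast is wrong.

```c
/* The class of error a run-time type checker can flag: the cast asserts
 * a type that the allocation does not have, yet no compiler warning results. */
#include <stdlib.h>

struct widget { int id; char *label; };
struct gadget { double mass; double volume; };

int main(void)
{
    void *p = malloc(sizeof (struct widget));  /* allocated as a widget... */
    struct gadget *g = (struct gadget *) p;    /* ...but cast to gadget */
    g->mass = 1.0;  /* silently clobbers the widget's fields; an instrumented
                       build would report the bad cast at run time */
    free(p);
    return 0;
}
```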
The semantics of programming languages are well studied. Two equally crucial but much less-studied areas are the semantics of linking and (separately) of system calls. The challenges of verified toolchains and verified run-time environments depend critically on accurate specifications for these services. With my colleagues on the REMS project, we're working on detailed specifications of both ELF linking and the Linux system call interface. This also means developing tools for experimentally testing and refining these specifications. In summer 2015 Jon French developed tools for specifying system calls in terms of their memory footprints. Motto: a specification for every instruction.
Debugging native code often involves fighting the tools. Programmers must trust to luck that the debugger can walk the stack, print values or call functions; when this fails, they must work with incomplete or apparently impossible stack traces, “value optimized out” messages, and the like. Obtaining debugging insight is even harder in production environments, where unreliable debugging infrastructure risks both downtime and exploits (see Linus Torvalds in this thread, among others). With REMS colleagues and Francesco Zappa Nardelli, we're working on tools and techniques for providing a robust, high-quality debugging experience. This comes in several flavours: at its simplest, it means sanity-checking compiler-generated debugging information. At its most complex, it means propagating more information across compiler passes, to enable debugging information that is correct by construction. Debugging is in tension with compiler optimisation, and is inherently in need of sandboxing (to limit who can consume what application state). Motto: if it compiles, it should also debug.
As of April 2016, this work is at an early stage.
Language barriers are holding back programming. They splinter the ecosystem into duplicated fragments, each repeating work. Foreign function interfaces (FFIs) are the language implementers' attempt at a solution, but do not serve programmers' needs. FFIs must be eliminated! Instead, using a descriptive plane (again, provided by liballocs), we can make higher-level language objects appear directly within the same metamodel that spans the entire Unix process, alongside lower-level ones, which are mapped into each language's “object space”. For example, in my hacked nodeJS (quite a bit of V8 hacking was required too), all native functions visible to the dynamic linker are accessible by name under the existing process object. You're no longer dependent on hand-written C++ wrapper code, nor packages like node-ffi to access these objects. The type information gathered by liballocs enables direct access, out-of-the-box. It's far from a robust system at the moment, but the proof-of-proof-of-concept-concept is encouraging. Motto: FFIs are evil and must be destroyed, but one VM need not (cannot) rule them all.
Programmers often want to develop custom tooling to help them profile and debug their applications. On virtual machines like the JVM, this traditionally means messing with low-level APIs such as JVMTI, or at best, frameworks like ASM which let you rewrite the application's bytecode. In Lugano, work done together with my DAG colleagues proposed a new programming model for these tasks based on aspect-oriented primitives (join points and advice), and also a yet higher-level model based on an event processing abstraction reminiscent of publish-subscribe middleware. On the run-time side, our infrastructure addresses the critical problem of isolation: traditionally, instrumentation and application share a virtual machine, and may interfere with one another in ways that are difficult to predict and impossible, in general, to avoid except by arbitrarily excluding shared classes (such as java.lang.* and the like). Our work gets around this by avoiding the very concept of bytecode instrumentation—instead we perform instrumentation using a straight-to-native path that hands off run-time events to a separate process (the “shadow VM”). This allows much stronger coverage properties than are possible with bytecode instrumentation, and nicely handles distributed applications, while also bringing challenges with event synchronisation and ordering. Motto: robust, high-coverage instrumentation without bytecode munging or hacky exclusion lists.
As a by-product of my work on reflective metadata in Unix processes, I've built various data structures for fast indexing of in-memory structures by their virtual address. The key idea is to use virtual address translation itself as the first stage of any lookup. This approach has its roots in folklore, but some of the extensions and variants I've built seem to be novel. Combined, they offer a complement of abilities including support for both wide and narrow range queries, iteration, and efficiency under differing degrees of sparseness and clusteredness in the key-space usage. These structures are pleasingly simple to code, and have been known to offer significant out-of-the-box performance improvements relative to more complex hand-rolled structures. Their overall performance characteristics are, however, complex and non-obvious. A personal research project of mine is to explore, measure and characterise this behaviour. Motto: virtual memory is hardware-assisted associative lookup.
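A minimal sketch of the core trick, assuming a 64-bit Linux system where a very large MAP_NORESERVE mapping is permitted: reserve a huge, lazily-populated table indexed directly by a shifted form of the virtual address, so that the MMU's translation performs the first stage of every lookup and untouched parts of the key space cost nothing. The granule size and entry type here are arbitrary choices for illustration.

```c
/* Sketch: an address-indexed metadata table whose first lookup stage is
 * virtual address translation itself. The table is enormous in virtual
 * terms, but MAP_NORESERVE plus demand paging means only the parts
 * actually touched ever occupy physical memory. (Assumes 64-bit Linux.) */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define GRANULE  16             /* one entry per 16 bytes of address space */
#define SPAN     (1ULL << 47)   /* user address-space span we cover */

static uint8_t *table;          /* entry: e.g. an index into richer metadata */

static int index_init(void)
{
    table = mmap(NULL, SPAN / GRANULE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return table == MAP_FAILED ? -1 : 0;
}

/* Constant-time get/set: shift the address and index the table directly. */
static uint8_t index_get(const void *addr)
{ return table[(uintptr_t) addr / GRANULE]; }

static void index_set(const void *addr, uint8_t v)
{ table[(uintptr_t) addr / GRANULE] = v; }

int main(void)
{
    if (index_init()) return 1;
    int x;
    index_set(&x, 42);                       /* tag the memory holding x */
    printf("metadata for %p: %u\n", (void *) &x, index_get(&x));
    return 0;
}
```

Range queries, iteration and the handling of very sparse or very clustered key spaces are where the more interesting variants come in; the sketch above shows only the point lookup.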
This work is at an early stage.
Programming by composition is the norm, but the norm also imposes needlessly high costs on developers. Keeping up with APIs as they evolve is a familiar drain on programmers' time. Inability to migrate to a different underlying library or platform is a familiar inhibitor of progress. It doesn't need to be this way. Why isn't software soft, malleable and composable? Why is so much tedious work left to the human programmer? My PhD work explored an alternative approach in which software is composed out of units which aren't expected to match precisely. A separate bunch of languages and tools, collectively called an “integration domain”, is provided to tackle the details of resolving this mismatch. I designed the language Cake to provide a rule-based, mostly-declarative notation for API adaptation. The Cake compiler accepts a bunch of rules relating calls and data structure elements between APIs, including in various context-sensitive fashions. It then generates (nasty C++) wrapper code, saving the programmer from writing wrappers by hand. Motto: down with manual glue coding, down with silos!
A recurring theme of my work is that I like to ask big questions, including questions of the form “what is X?” (for various X). The X so far have been the following.
Abstract: Existing approaches for detecting type errors in unsafe languages are limited. Static analysis methods are imprecise, and often require source-level changes, while most dynamic methods check only memory properties (bounds, liveness, etc.), owing to a lack of run-time type information. This paper describes libcrunch, a system for binary-compatible run-time type checking of unmodified unsafe code, currently focusing on C. Practical experience shows that our prototype implementation is easily applicable to many real codebases without source-level modification, correctly flags programmer errors with a very low rate of false positives, offers a very low run-time overhead, and covers classes of error caught by no previously existing tool.
This is a technical paper describing a system (called libcrunch) for run-time type checking in native code (C, for now). Perhaps the most important message of the paper, albeit not strictly its novelty, is that we can check properties of pointers without attaching metadata to the pointers themselves. Indeed, propagating metadata with pointers is quite expensive, whereas looking up properties of the pointed-to objects can be made quite cheap.
Abstract: Beneath the surface, software usually depends on complex linker behaviour to work as intended. Even linking hello_world.c is surprisingly involved, and systems software such as libc and operating system kernels rely on a host of linker features. But linking is poorly understood by working programmers and has largely been neglected by language researchers. In this paper we survey the many use-cases that linkers support and the poorly specified linker speak by which they are controlled: metadata in object files, command-line options, and linker-script language. We provide the first validated formalisation of a realistic executable and linkable format (ELF), and capture aspects of the Application Binary Interfaces for four mainstream platforms (AArch64, AMD64, Power64, and IA32). Using these, we develop an executable specification of static linking, covering (among other things) enough to link small C programs (we use the example of bzip2) into a correctly running executable. We provide our specification in Lem and Isabelle/HOL forms. This is the first formal specification of mainstream linking. We have used the Isabelle/HOL version to prove a sample correctness property for one case of AMD64 ABI relocation, demonstrating that the specification supports formal proof, and as a first step towards the much more ambitious goal of verified linking. Our work should enable several novel strands of research, including linker-aware verified compilation and program analysis, and better languages for controlling linking.
This is a technical paper about a formal but realistic (non-idealised) specification of Unix linking. Its focus is mostly on static linking of ELF files, although not exclusively. It describes our ongoing effort to produce an executable formal specification of linking that can be used as a link checker, a basis for mechanised reasoning about compiler and linker implementations, and as a document for developers to consult. Most critically perhaps, it explains why linking matters: why linking is not just about separate compilation but actually affects program semantics. It details a large zoo of “linking effects” programmers can achieve by (and often only by) instructing the linker.
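To make the point concrete, here is one small linking effect (my own illustration, not an example drawn from the paper): GNU ld's documented --wrap option reroutes every reference to a symbol through a wrapper, something that cannot be arranged from C source alone. The main.c mentioned in the comment stands for whatever program is linked against this file.

```c
/* wrap_malloc.c -- interposing on malloc purely by instructing the linker.
 * Link with:  gcc main.c wrap_malloc.c -Wl,--wrap=malloc
 * The linker then resolves calls to malloc() to __wrap_malloc(), and
 * resolves __real_malloc() to the original allocator. */
#include <stdio.h>
#include <stdlib.h>

void *__real_malloc(size_t size);      /* provided by the linker's --wrap */

void *__wrap_malloc(size_t size)
{
    void *p = __real_malloc(size);
    fprintf(stderr, "malloc(%zu) = %p\n", size, p);
    return p;
}
```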
Abstract: Dynamic program analyses, such as profiling, tracing and bug-finding tools, are essential for software engineering. Unfortunately, implementing dynamic analyses for managed languages such as Java is unduly difficult and error-prone, because the runtime environments provide only complex low-level mechanisms. Currently, programmers writing custom tooling must expend great effort in tool development and maintenance, while still suffering substantial limitations such as incomplete code coverage or lack of portability. Ideally, a framework would be available in which dynamic analysis tools could be expressed at a high level, robustly, with high coverage and supporting alternative runtimes such as Android. We describe our research on an “all-in-one” dynamic program analysis framework which uses a combination of techniques to satisfy these requirements.
This is an accessible introduction to the DiSL and ShadowVM systems for writing dynamic analyses for Java programs. It also covers recent developments in supporting a less cooperative runtime, namely Android's Dalvik VM, which brings particular challenges.
Abstract: Programmers face much complexity from the co-existence of “native” (Unix-like) and virtual machine (VM) “managed” run-time environments. Rather than having VMs replace Unix, we investigate whether it makes sense for Unix to “become a VM”, in the sense of evolving its user-level services to subsume services offered by VMs. We survey the (little-understood) VM-like features in modern Unix, noting common shortcomings: a lack of semantic metadata (“type information”) and the inability to bind from objects “back” to their metadata. We describe the design and implementation of a system, liballocs, which adds these capabilities in a highly compatible way, and explore its consequences.
This is a fairly technical paper introducing liballocs as a “missing link” in the design of operating systems. It describes some things we can already do (witness libcrunch) and things we expect to be able to do with it. It also puts liballocs in context, relating it both to the Unix tradition and to the Lisp- and Smalltalk-style virtual machine tradition.
Abstract: The concept of “type” has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of “type” have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of “type” where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems.
This is an essay I wrote out of discomfort with the vagueness of the word “type”, both in popular usage and among “expert” researchers. I finished it with the feeling that I'd just written “one to throw away”, so it may have been more illuminating for me to write than for you to read. Nevertheless I am very happy to receive comments. Subject to demand, I'll also be happy to accommodate any corrections (including corrections of omissions, where practical) in the author's version.
Abstract: Operating systems and programming languages are often informally evaluated on their conduciveness towards composition. We revisit Dan Ingalls' Smalltalk-inspired position that “an operating system is a collection of things that don't fit inside a language; there shouldn't be one”, discussing what it means, why it appears not to have materialised, and how we might work towards the same effect in the postmodern reality of today's systems. We argue that the trajectory of the “file” abstraction through Unix and Plan 9 culminates in a Smalltalk-style object, with other filesystem calls as a primitive metasystem. Meanwhile, the key features of Smalltalk have many analogues in the fragmented world of Unix programming (including techniques at the library, file and socket level). Based on the themes of unifying OS- and language-level mechanisms, and increasing the expressiveness of the meta-system, we identify some evolutionary approaches to a postmodern realisation of Ingalls' vision, arguing that an operating system is still necessary after all.
This is a bit of a curiosity. It's a position paper which I wrote after Michael Haupt pointed me at the cited Ingalls article. Every so often, the idea emerges that we don't need operating systems if we have language runtimes. This idea doesn't seem to stick, but it's hard to pin down what's wrong with it. Here I argue that a neglected value of operating systems is in their meta-abstractions, of which the Unix file is one classic example. These abstractions are not part of any language (or part of all of them, depending on how you see it). In fact, I argue the OS could and should provide much more of a meta-system. The paper makes a number of contrasts between Unix files and Smalltalk objects, then proposes a few concrete suggestions for evolutionarily improving the Unix-like metasystem towards a more Smalltalk-like one.
Abstract: Accuracy, completeness, and performance are all major concerns in the context of dynamic program analysis. Emphasizing one of these factors may compromise the other factors. For example, improving completeness of an analysis may seriously impair performance. In this paper, we present an analysis model and a framework that enables reducing analysis overhead at runtime through adaptive instrumentation of the base program. Our approach targets analyses implemented with code instrumentation techniques on the Java platform. Overhead reduction is achieved by removing instrumentation from code locations that are considered unimportant for the analysis results, thereby avoiding execution of analysis code for those locations. For some analyses, our approach preserves result accuracy and completeness. For other analyses, accuracy and completeness may be traded for a major performance improvement. In this paper, we explore accuracy, completeness, and performance of our approach with two concrete analyses as case studies.
This paper describes a mechanism for dynamically undeploying and redeploying bytecode instrumentation, and its application to some scenarios where this can usefully optimise an analysis without sacrificing too much correctness or completeness. My personal favourite thing about it is the observation that analyses maintain collections of repeatedly mutated values (profile data, monitors, etc.), and the transition rules obeyed by these mutations give us insight into when instrumentation can be removed. We can only exploit this in a very limited set of cases at present, but the potential for future work is interesting.
Abstract: Dynamic analysis tools are often implemented using instrumentation, particularly on managed runtimes including the Java Virtual Machine (JVM). Performing instrumentation robustly is especially complex on such runtimes: existing frameworks offer limited coverage and poor isolation, while previous work has shown that apparently innocuous instrumentation can cause deadlocks or crashes in the observed application. This paper describes ShadowVM, a system for instrumentation-based dynamic analyses on the JVM which combines a number of techniques to greatly improve both isolation and coverage. These centre on the offload of analysis to a separate process; we believe our design is the first system to enable genuinely full bytecode coverage on the JVM. We describe a working implementation, and use a case study to demonstrate its improved coverage and to evaluate its runtime overhead.
This paper describes a system for dynamic analysis of Java bytecode programs in a highly isolated fashion, by running the analysis in a separate process, adding only straight-to-native instrumentation paths within the Java bytecode itself. This sidesteps several coverage and robustness issues that plague in-VM analysis, as detailed by our VMIL 2012 workshop paper. The approach presented in this paper is probably the best you can do in a way that's (notionally) portable across JVMs. Its performance is sadly not great, but the next step (how to modify the VM's interfaces to support more efficient approaches) is set to be very interesting. This paper also gives an initial treatment of synchronisation requirements (i.e. relative ordering of events in the observed program w.r.t. their observation by the analysis), which vary hugely from one analysis to another and have a critical influence on achievable performance.
Abstract: The Java Virtual Machine (JVM) today hosts implementations of numerous languages. To achieve high performance, JVM implementations rely on heuristics in choosing compiler optimizations and adapting garbage collection behavior. Historically, these heuristics have been tuned to suit the dynamics of Java programs only. This leads to unnecessarily poor performance in case of non-Java languages, which often exhibit systematic difference in workload behavior. Dynamic metrics characterizing the workload help identify and quantify useful optimizations, but so far, no cohesive suite of metrics has adequately covered properties that vary systematically between Java and non-Java workloads. We present a suite of such metrics, justifying our choice with reference to a range of guest languages. These metrics are implemented on a common portable infrastructure which ensures ease of deployment and customization.
This paper describes an effort to extend the space of dynamic metrics defined at the JVM level so that they cover dimensions which exhibit variation between Java and non-Java languages running on the JVM. It describes both a new selection of metrics, and an infrastructure on which they are implemented. One of the parts I find neat is the query-based definition of a large subset of the metrics. Currently this is mostly a convenience, but I hope that in future work, this will prove useful for identifying new kinds of profiling information which is cheap enough to collect and use during dynamic compilation. (In other words, we want cheap observations which do a reasonable job of predicting the workload properties characterised by more expensive metrics, like the ones in this paper.)
Abstract: Dynamic program analysis tools based on code instrumentation serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. Unfortunately, constructing new analysis tools is unduly difficult, because existing frameworks offer little or no support to the programmer beyond the incidental task of instrumentation. We observe that existing dynamic analysis tools re-address recurring requirements in their essential task: of maintaining state which captures some property of the analysed program. This paper presents a general architecture for dynamic program analysis tools which treats the maintenance of analysis state in a modular fashion, consisting of mappers decomposing input events spatially, and updaters aggregating them over time. We show that this architecture captures the requirements of a wide variety of existing analysis tools.
This paper is a move towards enabling a more event-driven, reactive style of programming in the creation of dynamic analysis tools. We built an analysis framework, FRANC, based around a library of instrumentation primitives (event sources) together with higher-level “state-oriented” components for decomposing and aggregating events. The neatest idea in the paper is to separate the spatial decomposition of incoming events from their temporal aggregation. This allows a “thin waist” hourglass design which opens up greater opportunity for composition and recombination of independent components.
Abstract: Bytecode instrumentation is a preferred technique for building profiling, debugging and monitoring tools targeting the Java Virtual Machine (JVM), yet is fundamentally dangerous. We illustrate its dangers with several examples gathered while building the DiSL instrumentation framework. We argue that no Java platform mechanism provides simultaneously adequate performance, reliability and expressiveness, but that this weakness is fixable. To elaborate, we contrast internal with external observation, and sketch some approaches and requirements for a hybrid mechanism.
Shortly after joining USI, it became clear that a lot of our group's implementation work was concerned with working around shortcomings with the JVM. Moreover, we had seen some worrying problems with no apparent solution. The reason is that the JVM platform actively encourages tool construction by bytecode instrumentation. But writing tools which instrument bytecode safely is invariably hard and often (depending on the instrumentation) impossible. I don't just mean impossible for non-specialist programmers to get right; I mean outright impossible. Specifically, there is no way to avoid the risk of introducing real and disastrous bugs to the instrumented program. The reason is that instrumentation code is not protected from the base program, and runs at near-arbitrary points, so easily creates deadlock and reentrancy problems, which there is no mechanism to guard against. (Unlike in process-level approaches, “avoid shared data” is not an option. Even dropping to native code doesn't guarantee anything.) There are several more modest difficulties also. This paper is my attempt at turning my colleagues' war stories into a manifesto for VM improvements. It argues that instrumentation is a bad mechanism on which to base VM observation interfaces, and sketches out some alternatives. I take the opportunity to grind one or two personal axes, including the one about how in-VM debugger servers are a step backwards from compiler-generated debugging information.
Abstract: Current VM designs prioritise implementor freedom and performance, at the expense of other concerns of the end programmer. We motivate an alternative approach to VM design aiming to be unobtrusive in general, and prioritising two key concerns specifically: foreign function interfacing and support for runtime analysis tools (such as debuggers, profilers etc.). We describe our experiences building a Python VM in this manner, and identify some simple constraints that help enable low-overhead foreign function interfacing and direct use of native tools. We then discuss how to extend this towards a higher-performance VM suitable for Java or similar languages.
This is a short paper about some in-progress work that was begun with Conrad Irwin when he was a final-year undergraduate in Cambridge and I was his project supervisor. Conrad implemented a usable subset of Python which uses DWARF debugging information to interface with native libraries, rather than the usual wrapper generation step. This paper extends that idea in a few ways, culminating in a design which can (we hope!) allow native tools (gdb, Valgrind, etc.) to operate seamlessly across programs written in a diverse mixture of languages, while avoiding conventional foreign function interfacing overheads. I am continuing the implementation effort.
Abstract: Tools for composing software impose homogeneity requirements on what is composed—that modules must share a language, target the same libraries, or share other conventions. This inhibits cross-language and cross-infrastructure composition. We observe that a unifying representation of software turns heterogeneity of components into a matter of styles: recurring interface patterns that cross-cut large numbers of codebases. We sketch a rule-based language for capturing styles independently of composition context, and describe how it applies in two example scenarios.
This is a short paper describing an idea which I developed in the last technical chapter of my PhD thesis. The idea is that interfaces differ stylistically—in things like how they pass arguments, how they report errors, and in some less routine ways—and that by capturing these styles independently of composition context, we could achieve a lot more compositions with a lot less programmer effort. It's pitched as an extension to Cake; so far it's unimplemented, with the exception of a few foundations in the compiler; I intend to implement it one day, but have no idea when I'll get time. The most compelling content of the paper is therefore the preliminary survey (read: brainstorm) of styles in well-known interfaces most readers will recognise.
Abstract: Software's expense owes partly to frequent reimplementation of similar functionality and partly to maintenance of patches, ports or components targeting evolving interfaces. More modular non-invasive approaches are unpopular because they entail laborious wrapper code. We propose Cake, a rule-based language describing compositions using interface relations. To evaluate it, we compare several existing wrappers with reimplemented Cake versions, finding the latter to be simpler and better modularised.
This is a long research paper describing the design, implementation and evaluation of the basic Cake language and runtime.
Abstract: Conventional tools yield expensive and inflexible software. By requiring that software be structured as plug-compatible modules, tools preclude out-of-order development; by treating interoperation of languages as rare, adoption of innovations is inhibited. I propose that a solution must radically separate the concern of integration in software: firstly by using novel tools specialised towards integration (the “integration domain”), and secondly by prohibiting use of pre-existing interfaces (“interface hiding”) outside that domain.
This is a relatively “long view” on the position underlying my PhD work, expounding my gut feeling that the way we build software is bizarrely fragile and unrealistic, owing to the expectation that big pieces of software should be made from perfectly-fitting smaller pieces without any sort of “glue” or integration material. This is what makes software maintenance expensive, software evolution difficult and software functionality inflexible. I'm always especially glad to receive comments on the argument and general story presented by this paper.
Abstract: Existing black-box adaptation techniques are insufficiently powerful for a large class of real-world tasks. Meanwhile, white-box techniques are language-specific and overly invasive. We argue for the inclusion of special-purpose adaptation features in a configuration language, and outline the benefits of targetting binary representations of software. We introduce Cake, a configuration language with adaptation features, and show how its design is being shaped by two case studies.
This is a short work-in-progress paper describing one part of my PhD work. The idea is that composition of software in binary form is desirable, both for convenience and because binaries offer a somewhat language-neutral model of software. The problem of linking together mismatched components in binary form, specifically at the object code level (meaning these components could be written in any number of languages including C and C++), has had little attention so far. However, a fairly descriptive model of binaries is already provided by debugging info standards like DWARF. The thesis of this work is that a domain-specific configuration language, with its baseline at the DWARF level of abstraction, can be a practical tool for describing compositions of mismatched binaries, where the language explicitly provides for adaptation to address mismatches in a black-box fashion. The main contribution is describing the proposed features of our language, which is called Cake, and the case-study experiences which are shaping it.
Writing a large survey is a task only a fool would undertake... so I had a go. Practical software adaptation techniques means techniques for bridging or modifying the interfaces (mostly...) of software components. It doesn't include adaptive techniques, as found in reflective and/or mobile middleware. The paper is unfortunately a bit disjointed, coming in two halves: first, it covers twentyish approaches found in the literature, and classifies them by what they do and how. Second, it discusses (borrowing some bits from my earlier workshop paper) various other aspects of such techniques: adapting component instances versus implementations, static versus dynamic components, elegance of model versus supporting heterogeneity, the utility of various distinctions between computational and communicational code, and what “loosely coupled” really means. In hindsight, a better survey would take all these latter issues and fold them into the first half's taxonomy, visiting the work in some order which allows elaboration on the more discursive matters as they are raised. It also needs a neater way, most likely a better graphical language, to describe the taxonomised nature of each technique, rather than relying on somewhat impenetrable verbal descriptions.
Abstract: Existing work on software connectors shows significant disagreement on both their definition and their relationships with components, coordinators and adaptors. We propose a precise characterisation of connectors, discuss how they relate to the other three classes, and contradict the suggestion that connectors and components are disjoint. We discuss the relationship between connectors and coupling, and argue the inseparability of connection models from component programming models. Finally we identify the class of configuration languages, show how it relates to primitive connectors and outline relevant areas for future work.
ACM copyright notice: © ACM, 2007. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SYANCO ’07. http://doi.acm.org/10.1145/1294917.1294918
This is a somewhat strident and sweeping workshop paper where I try to decode the meaning and intentions of the many and various references to “software connectors” found in the literature. I then talk a bit about how this relates to coordination, adaptation and the concept of coupling, and give a sketchy outline of how I think compositional and communication-oriented concerns could or should be abstracted and supported by tools. Apart from the latter, there's not really any research contribution, except an attempt to hold to account the vague and flimsy motivational arguments that appear in the introductions of lots of (often otherwise good) papers.
See also my author page on DBLP (with much credit to Michael Ley for this superb service).
This is quite a nice introduction, albeit brief and rather dense, to my PhD work. Or at least the problem, approach, ideas etc., since not much actual work is mentioned....
This is a thesis proposal I submitted after about 12 months of PhD work. It reads nicely enough but is far too ambitious!
My research interests are outlined at the top of this page. In summary: although broad, they fall mostly within the intersections of systems, programming languages and software engineering.
I keep a calendar of approximate submission deadlines and event dates for most of the conferences and workshops that I might conceivably attend, together with a few for which it's verging on inconceivable. In case it's useful, here it is.
For my PhD I worked on supporting software adaptation at the level of the operating system's linking and loading mechanisms. Here “adaptation” means techniques for connecting and combining software components which were developed independently, and therefore do not have matched interfaces. My emphasis was on techniques which are practical, adoptable and work with a wide variety of target code, at the (hopefully temporary) expense of safety and automation.
The main focus of my work was Cake, a special-purpose language for describing relations between the interfaces of binary components (specifically, relocatable object code). Cake makes heavy use of DWARF debugging information, and can be considered interesting in several ways: as a domain-specific rule-based programming language; as a “composition”, “configuration” or “linking” language; as a dynamic language; as a runtime system sharing commonalities with garbage collectors and debuggers. It does not really make contributions in the domains of module systems or linking models.
To find out more, please browse my publications, and do contact me for more information. There will hopefully be one or two more papers appearing on additional work that I did during my PhD years. My dissertation is now available too.
I'm very grateful to EPSRC and Cambridge Philosophical Society for the funds which supported my PhD research work and some related travel, and to the Graduate Research Fund and Emily & Gordon Bottomley Fund of Christ's College, EuroSys, The Royal Academy of Engineering, ACM SIGSOFT and ACM SIGPLAN for additional support of my research travel and conference attendance.
For now, my research “leadership” is confined to the student projects I've been known to supervise, which are in the Teaching section. I am always interested in working with enthusiastic Bachelor's, Master's and doctoral students. I maintain a list of project ideas, and am always happy to talk about other ideas.
During 2005–06 I was a Research Assistant in Cambridge on the XenSE and Open Trusted Computing projects, under Steven Hand. Both projects seek to implement a practical secure computing platform, using virtualisation (and similar technologies) as the isolation mechanism.
XenSE never had a web page of its own, but you might want to look at the abstract on the project's EPSRC Grant Portfolio page, or check out the mailing list.
OpenTC is a large EU-funded project involving many major industrial and academic partners, focused on the use of Trusted Computing Group technology to realise many common secure computing use cases.
As part of my work as an RA, I became interested in secure graphical user interfaces including L4's Nitpicker, a minimal secure windowing system. I began work on ports of this system to Linux, XenoLinux and the Xen mini-OS: the Linux version became mostly functional (but not yet secure!) while the others were stymied by various limitations with shared memory on Xen. These limitations are mostly fixed now, but I haven't had time to revisit the project since. Feel free to contact me for more information. If you wanted to take these up, I'd be glad to hear from you.
In the past year or so I have been on the PCs for SPLASH workshops (2016) and Onward! Essays (2015), and a reviewer for ACM TACO.
In the recent past I have been an external reviewer for the ASE journal, on the programme committee for PPPJ 2013, publicity chair for SC 2013, on the programme committee for RESoLVE 2012 at ASPLOS, and the shadow programme committee for Eurosys 2012.
Previously, I was privileged to contribute external reviews to SVT at SAC 2012, ESOP 2010 and EuroSys 2009.
I'm a member of the ACM, ACM SIGSOFT, SIGPLAN and SIGOPS.
In Cambridge, I coordinated the NetOS group talklets from January 2009 until January 2010. I also had a librarian-like role of curating a small group library and keeping a very vague track of the various books we had lying around (local users: see /usr/groups/netos/library). Finally, I looked after (in a rather neglectful fashion) the Atlas Room BBC Micro (about which I should write more some time).
I'm a Fellow of the Cambridge Philosophical Society.
As of May 2015, I am a postdoctoral affiliate at Christ's College (of which I'm also an alumnus, from my BA and PhD days).
Since December 2015 I have been serving on the University of Cambridge's Board of Scrutiny, which has an oversight role in governance and policy matters.
During spring 2011 I was a tutor for the Digital Systems course in Oxford.
In Cambridge I have supervised (tutored) many systems and programming courses from the Computer Science Tripos. The list below includes both current and past courses I supervised, together with any additional materials I prepared.
In April 2010 I gave a lecture to the MPhil in Advanced Computer Science class in Cambridge, as part of the Cambridge Programming Research Group mini-series within the Research Students' Lecture series. My lecture was entitled “Modularity – what, why and how”. Contact me for slides. Other lectures in the CPRG mini-series were given by Dominic Orchard, Max Bolingbroke and Robin Message.
During Michaelmas 2009, in Cambridge, I demonstrated the MPhil course Building an Internet Router, run by Andrew Moore.
I'm interested in supervising bright and enthusiastic Bachelor's and Master's students for their individual projects. For ideas and to find out what I'm interested in, see my list of suggested projects. I'm also always extremely happy to talk to students who have their own ideas.
I've supervised several projects in the past. Bachelor's students at Cambridge can read my thoughts about Part II projects, see the project suggestions for 2010–11 from me and others in the NetOS group, or from the entire Lab and beyond, and contact me if you're interested. If you have your own ideas which you think I might make a good supervisor for, I'm always happy to talk about those too. Previously I've been fortunate enough to work with the following final-year students:
Note: as of late 2011, I have started using GitHub. Over time, code for all my larger projects will start to appear on my GitHub page. This page is likely to remain the more complete list, for the time being.
My research work involves building software. Inevitably, this software is never “finished”, rarely reaches release-worthy state, and usually rots quickly even then. So not much here is downloadable yet; this will improve over time, but in the meantime I list everything and invite you to get in touch if anything sounds interesting. My main projects, past and present, are:
I've also submitted small patches to various open-source projects including LLVM (bugfix to bitcode linker), binutils (objcopy extension), gcc (documentation fix for gcj), Knit (compile fixes), Datascript (bug fixes), DICE (bug fixes), pdfjam (support “page template” option) and Claws Mail (support cookies file in RSS plugin). Some of them have even been applied. :-)
Apart from my main development projects, I sometimes produce scripts and other odds and ends which might be useful to other people. Where time permits, I try to package them half-decently and make them available here.
I have written some scripts which attempt to retrieve decent BibTeX for a given paper (as a PDF or PostScript file). details
I've written a nifty script for printing papers, which helps people save paper, share printed-out papers and discover perhaps unexpected collaborators within their institution. details
I have a Makefile which downloads and compiles Tripos past-paper questions. It's pretty much self-documenting. Here it is.
I have built a sizable collection of vaguely useful shell scripts and other small fragments. One day “soon” I will get round to publishing them. The biggest chunks are my login scripts, which use Subversion to share a versioned repository of config files across all the Unix boxes that I have an account on, and the makefile and m4 templates that build this web page. I need to clean these up a bit. In the meantime, if you're interested in getting hold of them, let me know.
Occasionally I write down some thoughts which somehow relate to my work. They now appear in the form of a blog. Posts are categorised into research, teaching, development, publishing and meta strands.
I have the beginnings of a personal web page. It's very sparse right now. Have a look, if you like.
| Office | FS13 |
| Email | deduce from firstname.lastname@cl.cam.ac.uk |
| Post | Dr Stephen Kell, Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge, CB3 0FD, United Kingdom |
Content updated at Tue 31 Jan 17:30:00 GMT 2017.