Seminar series of the Mathematical Foundations group
Tuesdays 13.15 - 15.15 in CB 5.7
Seminars are open to all
More to come.
Thomas Sturm (CNRS, France and MPI for Informatics, Germany)
Beautiful Decision Methods and Adventurous Heuristics for Solving Problems over the Reals
Effective quantifier elimination procedures for first-order theories provide a powerful tool for generically solving a wide range of problems based on logical specifications. In contrast to general first-order provers, quantifier elimination procedures are based on a fixed set of admissible logical symbols with an implicitly fixed semantics. This admits the use of subalgorithms from symbolic computation. We start with traditional quantifier elimination applied to verification and to simple problems from the life sciences. Beyond quantifier elimination, we discuss recent results on an incomplete decision procedure for the existential fragment of the reals, which has been successfully applied to the analysis of reaction systems in chemistry and the life sciences, and which scales to models currently used in systems biology. We also mention our open-source computer logic software Redlog, where our methods are implemented (www.redlog.eu).
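As a standard textbook illustration of real quantifier elimination (my example, not necessarily one from the talk), eliminating the quantifier from the solvability condition of a monic quadratic yields its discriminant condition:

```latex
\exists x\, (x^2 + bx + c = 0) \;\Longleftrightarrow\; b^2 - 4c \geq 0
```

Both sides are equivalent over the reals, and the right-hand side is quantifier-free, so any question about the parameters b and c can now be settled by sign conditions alone.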
James Brotherston (University College London)
Biabduction (and Related Problems) in Array Separation Logic
I describe array separation logic (ASL), a variant of separation logic in which the data structures are either pointers or arrays. This logic can be used, e.g., to give memory safety proofs of imperative array programs.
The key to automatically inferring specifications is the so-called "biabduction" problem: given formulas A and B, find formulas X and Y such that
A * X |= B * Y
(and such that A * X is also satisfiable), where * is the well-known "separating conjunction" of separation logic. We give an NP decision procedure for this problem that produces solutions of reasonable quality, and we also show that the problem of finding a consistent solution is NP-hard.
Along the way, we study satisfiability and entailment in our logic, giving decision procedures and complexity bounds for both problems.
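As a toy illustration of the biabduction problem (my own sketch, not the paper's algorithm, and ignoring arrays entirely), consider the degenerate fragment where a symbolic heap is just a finite set of points-to facts. There, taking X = B \ A and Y = A \ B always works:

```python
# Toy biabduction for heaps that are plain sets of disjoint points-to
# facts (location, value). All names here are hypothetical; the talk's
# logic ASL also has arrays, which this sketch omits.

def biabduce(A, B):
    """Return (X, Y) such that A * X entails B * Y, by taking the
    set differences X = B \\ A and Y = A \\ B."""
    return B - A, A - B

def satisfiable(heap):
    """A set of points-to facts is satisfiable iff no location is
    claimed by two different cells (separation)."""
    locs = [loc for (loc, _) in heap]
    return len(locs) == len(set(locs))

A = {("x", 1)}
B = {("y", 2)}
X, Y = biabduce(A, B)
assert A | X == B | Y      # A * X and B * Y describe the same heap
assert satisfiable(A | X)  # and the combined heap is consistent
```

In the full logic the difficulty is precisely that heaps are not plain sets, so solutions vary in quality and finding a consistent one is NP-hard, as the abstract states.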
This is joint work with Nikos Gorogiannis (Middlesex) and Max Kanovich (UCL). A paper describing the work is available at https://arxiv.org/abs/1607.01993 .
Thomas Cottrell (University of Bath)
Operads, generalised operads, and weak n-categories
Operads are tools for defining algebraic structures in terms of the operations they have. In this talk, I will describe the classical case of operads, in which each operation has a natural number of inputs, called its arity. I will then explain how to generalise this definition to allow for operads with more complex shapes of inputs. Finally, I will show how these generalised operads can be used to define weak n-categories, a very general type of higher-dimensional category.
Soichiro Fujii (The University of Tokyo)
Generalized Global States
From the outset, the global state has been among the leading examples in the algebraic approach to computational effects. Indeed, the approach itself started from the recognition that the global state admits a computationally natural presentation in terms of operations and equations.
In this talk, I attempt to shed new light on global states by introducing a new class of computational effects which I tentatively call 'generalized global states'. First, I explain that the now standard presentation of the global state monad on Set (in terms of the update and lookup operations) is a particular instance of a much more general phenomenon, whose first appearance essentially dates back to Lawvere's thesis. Second, I present a unified operational semantics for generalized global states and state a relationship to Plotkin and Power's operational semantics based on effect values.
Martín Escardó (University of Birmingham)
Continuity in type theory
The formulation in Martin-Löf type theory of a Brouwerian continuity principle, saying that all functions (N->N)->N are continuous, via the so-called Curry-Howard interpretation of logic, turns out to be inconsistent. It becomes consistent under the univalent interpretation of logic, which is similar to that of topos logic. In particular, the notion of existence is interpreted as something strictly weaker than Martin-Löf's Sigma type, but stronger than its classical manifestation as the negation of a universal quantifier. In fact, the original paper by Howard already points out that there are two natural constructive notions of existence (a weak and a strong one).
Pedro Henrique Carrasqueira (CLE, University of Campinas, Brazil)
Some introductory remarks on paraconsistency and logics of formal inconsistency
A logic is paraconsistent if inconsistencies do not imply triviality in it. Paraconsistent logics are thus suited for reasoning in the presence of ineliminable inconsistencies, and they produce non-trivial but inconsistent theories. There are two main philosophical traditions in the study of inconsistency and paraconsistent logic. One of them advocates for the position known as dialetheism: that some contradictions are true, or, equivalently (given classical negation), that some propositions are both true and false. There is, however, another, somewhat earlier tradition, which takes a very different approach to the matter of inconsistency. This tradition assumes, instead, paraconsistency to be a common trait of logics that are in some sense appropriate for situations of imperfect rationality or imperfect information. In particular, research in this tradition focuses on logics in which the presence of disagreement or misinformation can be expressed in the formal language itself. Collectively, the various logics developed by this tradition and having such a property are known as logics of formal inconsistency. In my talk I shall briefly discuss the main differences that keep apart these two traditions of paraconsistency, with special attention to the different theoretical and practical problems they aim to solve. Then I shall focus on logics of formal inconsistency, taking the logic called mbC as my main example of the kind of behavior such logics exhibit. I shall end with some remarks on the ongoing research in this tradition of paraconsistency, as well as on its possible further developments.
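To make the contrast concrete, here is the characteristic behaviour of mbC in standard notation (a well-known fact about this logic, stated here for orientation rather than taken from the talk). The language is extended with a consistency operator ∘, and explosion is recovered only for formulas marked consistent:

```latex
A,\ \neg A \nvdash_{\mathrm{mbC}} B
\qquad\text{but}\qquad
{\circ}A,\ A,\ \neg A \vdash_{\mathrm{mbC}} B
```

The first non-derivability is what makes the logic paraconsistent; the second, so-called gentle explosion, is what lets inconsistency be expressed and controlled in the object language.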
David Sherratt (University of Bath)
Towards an atomic abstract machine
The atomic lambda-calculus is known for its efficiency when evaluating terms; it is naturally fully lazy. Abstract machines are used to administer evaluation strategies. In this talk, I will discuss the idea of building an atomic abstract machine, implementing the evaluation strategy for the atomic lambda-calculus, with the intent of developing a fully lazy abstract machine and proving it to be effective.
Alessio Santamaria (University of Bath)
Looking for a categorical notion of substitution of atomic flows
In this talk I present the work I have done in my first year: trying to give a categorical understanding of atomic flows in order to study their algebra - in the technical sense of the term, that is, the operations we can perform on them. Apart from plugging together and juxtaposing two flows, Guglielmi et al. proposed in an unpublished manuscript a notion of substitution of a flow inside a connected component of another, an operation which would generalise the usual notion of substitution of a formula for the occurrences of an atom in another formula to that of substitution of a derivation for the occurrences of an atom in another derivation. Categorically speaking, it seems that their idea can be formalised in terms of horizontal composition of the families of morphisms with which we interpret atomic flows. I will show how we generalised the well-known definition of horizontal composition of natural transformations to a larger class of families of morphisms, namely extranatural transformations, in a meaningful way, and what we have in mind for families of morphisms which are obtained by composing natural and extranatural transformations but are themselves neither.
Giulio Manzonetto (Université Paris XIII)
New Results on Morris's Observational Theory --- The Benefits of Separating the Inseparable
We study the theory of contextual equivalence in the untyped lambda-calculus, generated by taking the normal forms as observables. Introduced by Morris in 1968, this is the original extensional lambda theory H+ of observational equivalence. On the syntactic side, we show that this lambda-theory validates the omega-rule, thus settling a long-standing open problem. On the semantic side, we provide sufficient and necessary conditions for relational graph models to be fully abstract for H+. We show that a relational graph model captures Morris's observational pre-order exactly when it is extensional and lambda-König. Intuitively, a model is lambda-König when every lambda-definable tree has an infinite path which is witnessed by some element of the model.
Pino Rosolini (University of Genoa)
Frames from topology, algebraically
We describe a connection between frames and algebras for the double exponential monad on the Sierpinski space. Instrumental for the presentation is Dana Scott's category Equ of equilogical spaces. We present a subcategory of Equ, closed under the double exponential monad, on which the category of algebras is equivalent to that of frames (and frame homomorphisms). I hope to connect this with Taylor's work on Abstract Stone Duality.
This is joint work with Giulia Frosoni and Alessio Santamaria.
Jamie Vicary (University of Oxford)
Geometrical Proofs for Linear Logic
Linear logic is a fundamental way to reason about resources that cannot be duplicated or deleted. In this talk, I will present a new approach to the proof theory of linear logic, in which proofs are represented as surfaces embedded in 3-dimensional space. Proof equivalence then has a simple definition: two proofs are logically equivalent just when their surfaces are geometrically equivalent. The technical basis for the work comes from higher category theory, and I will give a simple and accessible introduction to this.
Paul Harrenstein (University of Oxford)
Expressiveness and Nash Equilibrium in Iterated Boolean Games
We introduce and investigate a novel notion of expressiveness for temporal logics that is based on game theoretic properties of multi-agent systems. We focus on iterated Boolean games, where each player has a goal, represented using (a fragment of) Linear Temporal Logic (LTL). This goal captures the player's preferences: the models of the goal represent system behaviours that would satisfy the player. Moreover each player is assumed to act strategically, taking into account the goals of the other players, in order to bring about computations satisfying their goal. In this setting, we apply the standard game-theoretic concept of Nash equilibria: the Nash equilibria of an iterated Boolean game can be understood as a (possibly empty) set of computations, each computation representing one way the system could evolve if players chose strategies in Nash equilibrium. Such an equilibrium set of computations can be understood as expressing a temporal property—which may or may not be expressible within a particular LTL fragment. The new notion of expressiveness that we study is then as follows: what LTL properties are characterised by the Nash equilibria of games in which agent goals are expressed in fragments of LTL? We formally define and investigate this notion of expressiveness and some related issues, for a range of LTL fragments.
Harry Gunn (University of Bath, Masters Student)
This is a review of several papers by D. Grigoriev and V. Shpilrain on a novel approach to public key cryptography. These authors write:
"We use various laws of classical physics to offer several solutions of Yao's millionaires' problem without using any one-way functions. We also describe several informationally secure public key encryption protocols, i.e., protocols secure against passive computationally unbounded adversary. This introduces a new paradigm of decoy-based cryptography, as opposed to 'traditional' complexity-based cryptography. In particular, our protocols do not employ any one-way functions."
Sam Staton (University of Oxford)
Semantics for probabilistic programming
I'll talk about the semantics of probabilistic programming languages. This is an old subject, but recently probabilistic programming has attracted a lot of interest as a method of statistical modelling, through languages like Anglican and Church. These raise some new problems, such as how to combine continuous distributions with higher types. I'll describe our work on operational semantics and denotational semantics (based on sheaves and measurable spaces).
John Power (University of Bath)
Category theoretic semantics for theorem proving in logic programming: embracing the laxness
(joint with Ekaterina Komendantskaya)
A propositional logic program P may be identified with a PfPf-coalgebra on the set of atomic propositions in the program, where Pf is the finite powerset functor. The corresponding C(PfPf)-coalgebra, where C(PfPf) is the cofree comonad on PfPf, describes derivations by resolution. Using lax semantics, that correspondence may be extended to a class of first-order logic programs without existential variables. The resulting extension captures proofs by term-matching resolution in logic programming. Refining the lax approach, we further extend it to arbitrary logic programs.
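A minimal sketch (mine, with hypothetical names) of the identification in the abstract, reading Pf as the finite powerset functor: a propositional program becomes a map sending each atom to the set of its clause bodies, and iterating that map approximates the derivation trees described by the cofree comonad:

```python
# A propositional logic program as a PfPf-coalgebra: each atom is sent
# to the finite set of its clause bodies, each body being a finite set
# of atoms.  Program:   p :- q, r.    p :- s.    q.
program = {
    "p": {frozenset({"q", "r"}), frozenset({"s"})},
    "q": {frozenset()},   # a fact: one clause with an empty body
    "r": set(),           # no clauses for r
    "s": set(),           # no clauses for s
}

def unfold(atom, depth):
    """A finite approximant of the derivation tree of an atom: apply the
    coalgebra map `depth` times, recording clause bodies at each level.
    Frozensets keep every level hashable."""
    if depth == 0:
        return atom
    return (atom, frozenset(frozenset(unfold(b, depth - 1) for b in body)
                            for body in program[atom]))

# The tree for p records both clauses for p; underneath the first one,
# q unfolds to a fact while r has no clauses at all.
tree = unfold("p", 2)
```

Iterating `unfold` to greater depths approximates the full (generally infinite) tree that the C(PfPf)-coalgebra assigns to an atom.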
John Gowers (University of Bath)
Games with ordinal sequences of moves
I shall present a modification of the Abramsky-Jagadeesan games model to allow sequences of moves indexed by transfinite ordinals. The motivation for this construction is work arising from work by Laird and Churchill in [1,2] concerning the sequoid operator. In , the authors construct an exponential in the category of games that is a cofree commutative comonoid for the 'tensor on the left' functor and a final coalgebra for the 'sequoid on the left' functor. In the category of finite games and strategies, this exponential can be constructed from the sequoid functor as the limit of a diagram indexed by the ordinal w. If we try to extend this result to the 'sequoidal categories' introduced in , then we find that this construction does not always produce a final coalgebra, but that for natural categories of games a similar construction using a higher ordinal will work. If the lengths of plays in an ordinal game can be bounded by a limit ordinal k then we may construct the final coalgebra for the sequoid using a suitable diagram indexed by k. Conversely, if a game contains plays greater than an ordinal k then the limit of the natural diagram indexed by k does not have the natural structure of a coalgebra.
There is a sizeable body of research in the field of games with plays indexed by transfinite ordinals (sometimes called 'long games'). In [3], Itay Neeman presents results concerning whether or not such games are determined. More recently, Laird has applied a similar model to the study of unbounded determinism in [4]. The construction given in this talk is a straightforward extension of the games model outlined in [1,5]. A nice feature of the construction is that it includes as special cases both the 'games with winning condition on infinite plays' given in [5], and the pure finite games introduced by Blass in [6].
After shifting focus from finite to infinite ordinals, it becomes convenient to treat plays (ordinal sequences of moves), rather than moves, as primitive, and one possible formulation is to define a game to be a sheaf of sets on some given ordinal κ, where the ordinal β < κ is sent to the set of legal plays of length β. In contrast to the Abramsky-Jagadeesan model, in which moves are designated as Player-moves or Opponent-moves, we have a function that designates plays as Player-plays or Opponent-plays. If a play is indexed by a limit ordinal, then it has no last move, so this distinction is important. For example, in the case of games played over the ordinal ω + 1, we are free to specify whether a play of length ω should belong to Player or to Opponent, and this corresponds exactly to choosing a set of infinite plays that are Player-winning, as in [5].
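In my paraphrase of this formulation (notation hypothetical, so treat it as indicative rather than the speaker's definition), such a game is a presheaf G on an ordinal κ assigning to each β < κ the plays of length β, with restriction given by taking prefixes:

```latex
G : \kappa^{\mathrm{op}} \to \mathbf{Set}, \qquad
G(\beta) = \{\text{legal plays of length } \beta\}, \qquad
G(\beta' \le \beta) : G(\beta) \to G(\beta'),\quad s \mapsto s|_{\beta'}
```

Roughly, the sheaf condition then says that a play of limit length is a compatible family of its proper prefixes, which is one way to see why plays, rather than last moves, must carry the Player/Opponent designation.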
I shall outline the motivation and construction for games played over transfinite ordinals, and shall discuss briefly some tentative questions about links to ordinals occurring elsewhere in the theory - in particular, game-value ordinals for winning positions and consistency-strength ordinals in proof theory.
[1] Martin Churchill, James Laird and Guy McCusker. Imperative programs as proofs via game semantics. LICS 2011: 65-74, June 2011.
[2] James Laird. A Categorical Semantics of Higher Order Store. Proceedings of CTCS '02, Elsevier, 2002.
[3] Itay Neeman. The Determinacy of Long Games. De Gruyter Series in Logic and its Applications, 2004.
[4] James Laird. Sequential Algorithms for Unbounded Nondeterminism. MFPS XXXI: 271-287, 2015.
[5] Samson Abramsky, Radha Jagadeesan. Games and full completeness for multiplicative linear logic. Journal of Symbolic Logic 59 (02): 543-574, 1994.
[6] A. Blass. A game semantics for linear logic. Annals of Pure and Applied Logic, 56(1-3): 183-220, 1992.
Willem Heijltjes (University of Bath)
Proof Nets and Complexity
In this talk I will give an overview of some recent and some very recent developments in linear logic proof nets.
Giuseppe Primiero (Middlesex University)
SecureND: Natural Deduction for Secure Trust
Applications in computational domains complement verified knowledge with information sharing processes. From a logical viewpoint, formulating assertion operations in terms of a trust function is challenging, both conceptually and technically. In this talk we overview SecureND, a natural deduction calculus for knowledge derivation under trust. Its design is motivated by the problem of trust transitivity. We also present its implementation as the Coq protocol SecureNDC, used to deal with trusted sources in software management systems. We conclude with an overview of current and future extensions of our language.
Andrea Aler Tubella (University of Bath)
A generalised cut-elimination procedure through subatomic logic
Through subatomic logic we are able to present sufficient conditions for a proof system to enjoy cut-elimination. In this talk I will present subatomic logic, how it enables us to present proof systems that have a single (linear) rule scheme, and a recent result: we can generalise the splitting procedure for cut-elimination to any proof system whose rules and connectives have certain properties.
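For a flavour of what a single linear rule scheme looks like, here is the medial-like shape used in subatomic systems, quoted from memory and so best treated as indicative (A, B, C, D are formulas; α and β are connectives, with atoms themselves treated as binary connectives):

```latex
\dfrac{(A \mathbin{\alpha} B) \mathbin{\beta} (C \mathbin{\alpha} D)}
      {(A \mathbin{\beta} C) \mathbin{\alpha} (B \mathbin{\beta} D)}
```

Instantiating α and β at different pairs of connectives recovers familiar deep-inference rules such as medial, which is what makes a uniform cut-elimination argument possible.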
17 & 24 November
John Power (University of Bath)
I plan to give two talks about Lawvere theories. These are not talks for experts; rather, the aim is to give the details. Lawvere introduced the notion in his PhD thesis in 1963, providing not only a category theoretic account of universal algebra but one that is also presentation-independent. Remarkably, his definition was embraced, albeit with a caveat and in different terms, by universal algebraists but not by category theorists. The latter, from 1966, generally preferred monads for modelling universal algebra, owing to a little more generality but at a very considerable cost. Computer scientists were then influenced to adopt monads, but much could be and has been gained by recasting some of the latter's concerns in terms of Lawvere theories. Ultimately, I think Lawvere theories are a superior approach, but they benefit very much from the relationship with monads, which I duly plan to explain.
Guillaume Munch-Maccagnoni (University of Cambridge)
Polarised realizability structures, models, and depolarisation
Polarisation describes the presence of an evaluation order, and is characterised denotationally by a non-associativity of compositions. We recently proposed a polarised, Curry-style approach to the λ-calculus with extensional sums, in correspondence with polarised intuitionistic logic. We suggested that associativity of composition in this context should not be seen as a syntactic axiom, but as an emergent property akin to termination. Traditionally, issues with sums in denotational semantics have been attributed to extensionality rather than to associativity. This will be explained in an introductory fashion in the first part.
In a second part, I will more formally relate the termination in the λ-calculus with sums to depolarisation, i.e. associativity of composition, or more familiarly the fact that the order of evaluation does not matter. First, a general setting of polarised realizability structures for polarised calculi with or without control operators is developed. Then, a general technique to build observational models from these structures is explained. Finally, under broad conditions, the observational models that the non-associative syntactic structure gives rise to satisfy the associativity of composition (and are therefore cartesian closed categories with binary co-products). I will sketch an analogy between intuitionistic depolarisation and parametricity.
Matthijs Vákár (University of Oxford)
Game Semantics for Dependent Types
Game semantics can act as a unifying semantic framework, providing compelling models for a strikingly wide range of programming languages, type theories and logics. A notable exception has been dependent type theory, which had so far defied a game theoretic description. We present a proposal to fill this gap in the form of a new categorical model of dependent type theory, based on a category of games and history-free winning strategies. We model dependent type theory with 1-, Sigma-, Pi- and intensional Id-types as well as finite inductive type families (which act as ground types, like calendars). We discuss the place of the Id-types in the intensionality spectrum as well as the strong completeness properties the model satisfies.
Most of the talk should be understandable without prior knowledge of game semantics and dependent type theory.
Ugo dal Lago (Università di Bologna)
Higher-Order Probabilistic Computation: Calculi, Observational Equivalence, and Implicit Complexity
Probabilistic models are more and more pervasive in computer science, and randomized algorithms are the ones offering the best performances in many domains. Higher-order probabilistic computation – in which a probabilistic function may be passed as a parameter and returned as a result – is on the other hand a relatively underdeveloped field, which is however receiving more and more attention. We give a survey of what is known about probabilistic lambda-calculi, later focusing on some of our recent results on implicit complexity and on inductive and coinductive techniques for program equivalence. Finally, we hint at how all this could be useful when structuring proofs of security for cryptographic primitives, but also when expressing probabilistic models in the context of machine learning.
Willem Heijltjes (University of Bath)
Complexity Bounds for Sum-Product Logic via Additive Proof Nets and Petri Nets
This is joint work with Dominic Hughes. We investigate efficient algorithms for the additive fragment of linear logic. This logic is an internal language for categories with finite sums and products, and describes concurrent two-player games of finite choice. In the context of session types, typing disciplines for communication along channels, the logic describes the communication of finite choice along a single channel.
We give a simple linear-time correctness criterion for unit-free propositional additive proof nets via a natural construction on Petri nets. This is an essential ingredient in the linear-time complexity of Dominic Hughes's combinatorial proofs for classical logic.
For full propositional additive linear logic, including the units, we give a proof search algorithm that runs in time linear in the product of the sizes of the source and target formulas, and an algorithm for proof net correctness of the same time complexity. We prove that proof search in first-order additive linear logic is NP-complete.
Anupam Das (ENS Lyon)
A complete axiomatisation of MSOL on infinite trees
We show that an adaptation of Peano's axioms for second-order arithmetic to the language of monadic second-order logic (MSOL) completely axiomatises the associated theory (SkS) over infinite trees. This continues a line of work begun by Büchi and Siefkes with axiomatisations of MSOL over various classes of linear orders. Our proof formalises, in the axiomatisation, a translation of MSO formulas to alternating parity tree automata. The main ingredient is the formalised proof of positional determinacy for the corresponding parity games which, as usual, allows us to complement automata and to deal with the negation of MSO formulas. The Comprehension Scheme of MSOL is used to obtain uniform winning strategies, whereas most usual proofs of positional determinacy rely on instances of the Axiom of Choice or transfinite induction. (Consequently we obtain an alternative decision procedure for MSOL over infinite trees, via proof search, that remains entirely internal to the language.)
This talk is based on joint work with Colin Riba that will be presented at LICS '15.
Georg Struth (University of Sheffield)
Completeness Theorems for Bi-Kleene Algebras and Series-Parallel Rational Pomset Languages
Pomsets form a standard model of true concurrency. In this lecture I present a completeness result for a class of pomset languages, which generalises the regular languages to the realm of concurrency. More precisely I show that the congruence on series-parallel rational pomset expressions generated by series-parallel rational pomset language identity is axiomatised by the axioms of Kleene algebra plus those of commutative Kleene algebra. A decision procedure is extracted from this proof. On the way to this result, series-parallel rational pomset languages are proved to be closed under the operations of co-Heyting algebras and homomorphisms. These results form a significant step towards a decision procedure for the equational theory of concurrent Kleene algebras, which have recently been proposed for concurrency verification (joint work with Michael Laurence).
James Brotherston (University College London)
Parametric completeness for separation theories (via hybrid logic)
In this talk, we consider the logical gap between the following two concepts:
(1) provability in a propositional axiomatisation of separation logic, which is usually given by the bunched logic BBI; and
(2) validity in an intended class of models of separation logic, as commonly considered in its program verification applications. Such intended classes are usually specified by a collection of algebraic axioms describing specific model properties, which we call a separation theory.
Here, we show first that several typical properties of separation theories are in fact not definable in BBI. Then, we show that these properties become definable in a natural hybrid extension of BBI, obtained by adding a theory of naming to BBI in the same way that hybrid logic extends normal modal logic. Finally, we show how to build an axiomatic proof system for our hybrid logic in such a way that adding any axiom of a certain form yields a sound and complete proof system with respect to the models satisfying those axioms. In particular, this yields sound and complete proof systems for any separation theory from our considered class (which, to the best of our knowledge, includes all those appearing in the literature).
This is joint work with Jules Villard, now at Facebook.
Guilhem Jaber (Queen Mary)
Reasoning on Equivalence of Stateful Programs with Operational Game Semantics
Contextual equivalence of programs written in a functional language with references (i.e. local mutable state) is a notoriously hard problem, especially with higher-order references (i.e. references which can store functions). In the last twenty years, different techniques have been introduced for this purpose: Kripke Logical Relations, Bisimulations and Algorithmic Game Semantics.
In this talk, we will see how to use operational game semantics, namely the trace semantics for a language with references introduced by Laird, to build a new technique, Kripke Open Bisimulations, to reason about equivalence of programs, taking the best of the previous methods. This technique is simple enough to be mostly automated: it becomes possible to model-check equivalence of programs.
If time permits, we will see how to extend this technique to polymorphism.