

Seminar series of the Mathematical Foundations group at the University of Bath.

Tuesdays 13:15 - 15:05, in room 6E 2.5 (unless otherwise specified) or online via Microsoft Teams (https://teams.microsoft.com/l/team/19%3a7398647a739142b499551c59dece2ff6%40thread.tacv2/conversations?groupId=2779e154-04d8-464a-9035-03cdf22bd135&tenantId=377e3d22-4ea1-422d-b0ad-8fcc89406b9e)

Seminars are open to all. Email Chris Barrett in order to attend, as we'll need to add you as a 'guest' to our Team.

Organisers: Willem Heijltjes, Giulio Guerrieri and Chris Barrett
Email us to suggest speakers.






Upcoming seminars:

  • On Tuesday May 3rd, Dag Normann (University of Oslo): Typed lambda calculus and operators of mainstream mathematics
  • On Tuesday May 10th, Chris Brown (USNA)
  • On Tuesday May 17th, Fabio Zanasi (UCL)
  • On Tuesday May 24th, Elena Di Lavore: Dialectica Petri Nets



Past seminars:

5th April (Online via MS Teams)

William Troiani (University of Melbourne)

The Gentzen-Mints-Zucker Duality, revisiting the Curry-Howard Correspondence

I will report on joint work with Daniel Murfet, in which we revisit Howard’s work on the Curry-Howard correspondence and interpret it as an isomorphism between a category of proofs in intuitionistic sequent calculus and a category of terms in simply-typed lambda calculus. Some version of this is known to the experts due to work of Mints, Zucker and others, but not for Gentzen’s original calculus. I will explain our technical contributions to the understanding of normal forms of sequent calculus proofs, and our view that the fundamental duality expressed is not between “proofs and programs” but between local and global presentations of a logico-computational structure.


22nd March (Online via MS Teams)

Robin Kaarsgaard (University of Edinburgh)

Quantum information effects

We study the two dual quantum information effects to manipulate the amount of information in quantum computation: hiding and allocation. The resulting type-and-effect system is fully expressive for irreversible quantum computing, including measurement. We provide universal categorical constructions that semantically interpret this arrow metalanguage with choice, starting with any rig groupoid interpreting the reversible base language. Several properties of quantum measurement follow in general, and we translate (noniterative) quantum flow charts into our language. The semantic constructions turn the category of unitaries between Hilbert spaces into the category of completely positive trace-preserving maps, and they turn the category of bijections between finite sets into the category of functions with chosen garbage. Thus they capture the fundamental theorems of classical and quantum reversible computing of Toffoli and Stinespring.


15th March (In-person)

Avgerinos Delkos (University of Birmingham)

Proof complexity of monotone branching programs

In this talk I will present our investigation of the proof complexity of systems based on positive branching programs, i.e. non-deterministic branching programs (NBPs) where, for any 0-transition between two nodes, there is also a 1-transition. Positive NBPs compute monotone Boolean functions, like negation-free circuits or formulas, but constitute a positive version of (non-uniform) NL, rather than Ptime or NC1, respectively.

The proof complexity of NBPs was investigated in previous work by Buss, Das and Knop, using extension variables to represent the dag-structure, over a language of (non-deterministic) decision trees, yielding the system eLNDT.
Our system poseLNDT is obtained by restricting their system to a positive syntax, similarly to how the 'monotone sequent calculus' mLK is obtained from the usual sequent calculus LK by restricting to negation-free formulas.

The main result is that poseLNDT polynomially simulates eLNDT over positive sequents. Our proof method is inspired by a similar result for mLK by Atserias, Galesi and Pudlák, which was recently improved to a bona fide polynomial simulation via works of Jeřábek and of Buss, Kabanets, Kolokolova and Koucký. Along the way we formalise several properties of counting functions within poseLNDT by polynomial-size proofs and, as a case study, give explicit polynomial-size proofs of the propositional pigeonhole principle.



8th March (Online via MS Teams)

Mario Román (Tallinn University of Technology)

Monoidal Streams for Dataflow Programming

In this work, we introduce monoidal streams: a generalization of causal, synchronous stream functions to monoidal categories. Monoidal streams form a feedback monoidal category that gives semantics to signal flow graphs. In the same way that streams provide semantics to dataflow programming with pure functions, monoidal streams provide semantics to dataflow programming with theories of processes represented by a symmetric monoidal category. I will present and employ some elementary results from coalgebra, coend calculus and feedback monoidal categories. We will see some examples from stochastic dataflow programming.

This talk is based on joint work with Elena Di Lavore and Giovanni de Felice. The preprint can be found at https://arxiv.org/abs/2202.02061.



1st March (Online via MS Teams)

Claudia Faggian (IRIF)

Bayesian Networks and Proof-Nets: a proof-theoretical account of Bayesian Inference

We uncover a strong correspondence between Bayesian Networks and (Multiplicative) Linear Logic Proof-Nets, relating the two both as representations of a joint probability distribution and at the level of computation, thus yielding a proof-theoretical account of Bayesian Inference.

In particular, we relate the most widely used algorithm for exact inference (Message Passing over clique trees) with the inductive interpretation (in probabilistic coherence spaces) of a proof, suitably factorized into a composition of smaller proofs. The correspondence turns out to be so tight that even the computational cost of inference is similar.

This is joint work with Thomas Ehrhard, Robin Lemaire and Michele Pagani.



2021

14th December (Online via MS Teams)

Pierre Clairambault (CNRS)

Learning to count with concurrent games with symmetry

In quantitative (e.g. probabilistic or quantum) extensions of concurrent games, each witness of a certain configuration is assigned a weight. There is a neat link with weighted relational models: for each configuration of the game / point of the web, one adds up the weights of the corresponding witnesses. That seems simple enough.

But - as always - this pretty picture is disturbed by symmetry: each state of the strategy has countably many symmetric copies because of the countably many copy indices of Opponent moves. How do we know which states matter? Before linking concurrent games with weighted relational models, one first needs to learn how to count.

In this talk I will (aim to) show how to count in concurrent games, uncovering some interesting structure and disturbing pathological examples.


30th November (Online via MS Teams)

Marie Kerjean (LIPN)

∂ is for Dialectica

Dialectica is a proof transformation acting on intuitionistic logic
which factors through Girard's translation in Linear Logic. It allows
the realization of several semi-classical axioms such as Markov's
principle and the independence of premises. By taking the viewpoint of
differential lambda-calculus and recent developments in differentiable
programming, we will show how Dialectica computes higher-order backward
differentiation. We will illustrate this through the lens of categories,
lambda-calculus and logic. This is joint work with Pierre-Marie Pédrot.

23rd November (Online via MS Teams, at 3.15pm UK time)

Priyaa Srinivasan (University of Calgary)

Dagger linear logic for categorical quantum mechanics

Dagger monoidal and dagger compact closed categories are the standard settings for Categorical Quantum Mechanics (CQM). These settings of CQM are categorical proof theories of compact dagger linear logic and are motivated by the interpretation of quantum systems in the category of finite dimensional Hilbert spaces. In the first part of my talk, I will describe a new non-compact framework called Mixed Unitary Categories (MUCs), built on linearly distributive categories (LDCs) and *-autonomous categories, which are categorical proof theories of (non-compact) multiplicative linear logic. The new framework is obtained by adding a dagger functor satisfying appropriate coherence conditions to an LDC, and by extracting a 'unitary core' from an isomix dagger LDC. One of the motivations to develop a non-compact framework is to accommodate arbitrary dimensional systems in CQM in a structurally seamless manner. I will discuss a few examples of MUCs and describe some basic constructions.

The second part of my talk concerns the connection between the exponential modalities of linear logic and complementary observables in quantum mechanics. Two quantum observables A and B are complementary if measuring one increases the uncertainty regarding the value of the other.

By formulating quantum systems within MUCs, I will show how the exponential modalities - ! and ? - of linear logic may be "compacted" into the usual notion of complementary observables (special commutative dagger Frobenius algebras interacting to produce two Hopf algebras) in a dagger monoidal category, thereby exhibiting a complementary system as arising via the compaction of distinct systems of arbitrary dimensions.


9th November (Online via MS Teams)

Amr Sabry (Indiana University)

Programming with Negative and Fractional Types

Computation is a physical process that obeys, among other laws, the law of conservation of information. Designing a first-order language
that embraces this principle is relatively simple, requiring only the well-established isomorphisms between finite types. This first-order
language is reversible, has a simple operational semantics, and has a categorical semantics in bimonoidal categories.

Monoidal categories have higher-order extensions called compact closed categories whose main ingredient is the presence of dual objects. In
the setting of type theory, this requires a “negative type” that acts as a dual to the conventional sum type, and a “fractional type” that
acts as a dual to the conventional product type. The categorical semantics of these dual objects is well-established but what could
they correspond to in the world of operational semantics?

The main technical results are an interpretation of negative types using backtracking, an interpretation of fractional types using
garbage-collection, and an illustration of the expressive power of these type constructors for higher-order programming in a reversible
language.

The material is based on several publications culminating with a recent POPL 2021 paper with Ph.D. student Chao-Hong Chen.


2nd November (Online via MS Teams)

Sam Sanders (Ruhr-University Bochum)

On the logical and computational properties of the uncountable

We discuss recent results in higher-order computability theory and higher-order Reverse Mathematics 
due to Normann-Sanders. The general theme is that the third-order world looks very different 
from the second-order world, where the latter studies third-order objects via second-order representations. This is joint work with Dag Normann.


26th October (Online via MS Teams)

Anton Freund (TU Darmstadt)

Optimal proofs of Kruskal's theorem
Abstract: We recall Kruskal's tree theorem and two proofs with different
desirable properties: a proof via reifications, which is as elementary as
possible, in a precise logical sense; and Nash-Williams' proof via minimal
bad sequences, which is less elementary but extremely elegant. We then
introduce a uniform Kruskal theorem, which extends the original result
from trees to general recursive data types. To prove this uniform result,
the minimal bad sequence method is still sufficient but now also
necessary. In other words: the elegant and the most elementary proof
coincide, but only after uniformity has been added. The talk is based on
joint work with Rathjen and Weiermann (arXiv:2001.06380).


19th October (Online via MS Teams)

Thomas Ehrhard (IRIF)

Coherent Differentiation
The differential calculus uses addition in a crucial way: this appears clearly in the Leibniz Rule. This is why the differential lambda-calculus as well as differential linear logic feature an unrestricted addition operation on terms (or proofs) of the same type. From an operational point of view, this means that these formalisms feature finite non-determinism. So a natural question arises: is non-determinism inherent to any differential extension of the lambda-calculus and linear logic? Based on intuitions coming from probabilistic coherence spaces, we provide a negative answer to this question, introducing a setting of “summable categories” where the notion of summable pairs of morphisms is axiomatized by means of a functor equipped with three natural transformations satisfying a few axioms. Then we define a notion of differentiation in such a summable category equipped with a resource modality (exponential in the sense of linear logic): this differential structure is a distributive law between a monad associated with the summability structure and the resource comonad, satisfying a few additional axioms. I will present various aspects of this theory, which features similarities with tangent categories, as well as concrete instances.


20th July (Online via MS Teams)

Andrei Sipoș (University of Bucharest & IMAR)

A proof mining case study on the unit interval

In 1991, Borwein and Borwein proved [1] the following: if L > 0, f : [0,1] → [0,1] is L-Lipschitz, and (x_n), (t_n) ⊆ [0,1] are such that for all n, x_{n+1} = (1 - t_n) x_n + t_n f(x_n), and there is a δ > 0 such that for all n, t_n ≤ (2 - δ)/(L + 1), then the sequence (x_n) converges.

The relevant fact here is that the main argument used in their proof is of a kind that hasn't been analyzed yet from the point of view of proof mining, and thus it may serve as an illustrative new case study. We shall present our work [2] on the proof, showing how to extract a uniform and computable rate of metastability for the above family of sequences.
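
As a concrete toy illustration of the scheme being analyzed (an editorial sketch, not taken from [1] or [2]; the function f and the step sizes t_n below are made-up choices satisfying the hypotheses with L = 1), the iteration can be run directly in Haskell:

    -- x_{n+1} = (1 - t_n) x_n + t_n f(x_n), for f L-Lipschitz on [0,1]
    -- and t_n <= (2 - delta)/(L + 1) for some fixed delta > 0.
    iterates :: (Double -> Double) -> [Double] -> Double -> [Double]
    iterates f ts x0 = scanl step x0 ts
      where step x t = (1 - t) * x + t * f x

    main :: IO ()
    main = mapM_ print (take 10 (iterates f ts 0))
      where
        f x = 1 - x * x / 2  -- a 1-Lipschitz self-map of [0,1], so L = 1
        ts  = repeat 0.4     -- 0.4 = (2 - delta)/(L + 1) with delta = 1.2 > 0

The printed sequence visibly converges (to the fixed point of f, approximately 0.732); the content of [1] and [2] is that such convergence is guaranteed, and, in the case of [2], guaranteed quantitatively.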

References:
[1] D. Borwein, J. Borwein, Fixed point iterations for real functions. J. Math. Anal. Appl. 157, no. 1, 112–126, 1991.
[2] A. Sipoș, Rates of metastability for iterations on the unit interval. J. Math. Anal. Appl. 502, no. 1, 125235 [11 pages], 2021.




29th June (Online via MS Teams)

Alex Rice (University of Cambridge)

New minimal linear inferences in Boolean logic independent of switch and medial

A linear inference is a valid inequality of Boolean algebra in which each variable occurs at most once on each side. Equivalently, it is a linear rewrite rule on Boolean terms that constitutes a valid implication. Linear inferences have played a significant role in structural proof theory, in particular in models of substructural logics and in normalisation arguments for deep inference proof systems.

Systems of linear logic and, later, deep inference are founded upon two particular linear inferences, switch : x ∧ (y ∨ z) → (x ∧ y) ∨ z, and medial : (w ∧ x) ∨ (y ∧ z) → (w ∨ y) ∧ (x ∨ z). It is well-known that these two are not enough to derive all linear inferences (even modulo all valid linear equations), but beyond this little more is known about the structure of linear inferences in general. In particular, despite recurring attention in the literature, the smallest linear inference not derivable under switch and medial ('switch-medial-independent') was not previously known.
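
As a quick sanity check (an editorial sketch, not the search implementation described in the talk), the validity of switch and medial as Boolean implications can be confirmed by brute force over all assignments; in Haskell, where (<=) on Bool is exactly implication:

    import Control.Monad (replicateM)

    -- A linear inference is valid iff the implication holds
    -- under every Boolean assignment of its variables.
    switchValid, medialValid :: Bool
    switchValid = and [ (x && (y || z)) <= ((x && y) || z)
                      | [x, y, z] <- replicateM 3 [False, True] ]
    medialValid = and [ ((w && x) || (y && z)) <= ((w || y) && (x || z))
                      | [w, x, y, z] <- replicateM 4 [False, True] ]

    main :: IO ()
    main = print (switchValid, medialValid)  -- prints (True,True)

Checking validity this way is exponential in the number of variables, which is one reason the search for independent inferences needs the more efficient graphical representations mentioned below.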

In this work we leverage recently developed graphical representations of linear formulae to build an implementation that is capable of more efficiently searching for switch-medial-independent inferences. We use it to find two 'minimal' 8-variable independent inferences and also prove that no smaller ones exist; in contrast, a previous approach based directly on formulae reached computational limits already at 7 variables. One of these new inferences derives some previously found independent linear inferences. The other exhibits structure seemingly beyond the scope of previous approaches we are aware of; in particular, its existence contradicts a conjecture of Das and Strassburger.

This talk is based on joint work with Anupam Das that will be published in the proceedings of FSCD 2021.


22nd June (Online via MS Teams)

Anupam Das (University of Birmingham)

A circular version of Gödel's T and its abstraction complexity

Circular and non-wellfounded proofs have become an increasingly popular tool for metalogical treatments of systems with forms of induction and/or recursion. In this work we investigate the expressivity of a variant CT of Gödel's system T where programs are circularly typed, rather than including an explicit recursion combinator. In particular, we examine the abstraction complexity (i.e. type level) of CT, and show that the Gödel primitive recursive functionals may be typed more succinctly with circular derivations, using types precisely one level lower than in T. In fact we give a logical correspondence between the two settings, interpreting the quantifier-free type 1 theory of level n+1 T into that of level n CT and vice versa.

We also obtain some further results and perspectives on circular 'derivations', namely strong normalisation and confluence, models based on hereditary computable functionals, continuity at type 2, and a translation to terms of T computing the same functional, at all types.


8th June (Online via MS Teams)

Alex Gheorghiu (UCL)

Focused Proof-search in the Logic of Bunched Implications

The logic of Bunched Implications (BI) freely combines additive and multiplicative connectives, including two primitive intuitionistic implications, and is generally used as a natural reasoning system for systems with shared and separated resources. The logic admits all the familiar forms of proof system and semantics, though often with subtle caveats that render the study both more difficult and more interesting. However, proof-search in BI has always been a particularly challenging problem; indeed, the logic is not currently known to be decidable. Focusing is a restriction of the proof-search space that captures various goal-directed proof-search procedures, and is particularly significant as an enabling technology for more general approaches to logic programming. This talk concerns how focusing works in BI, briefly why focusing is an important phenomenon, and how general notions of proof-search are increasingly important.

The study is proof theoretic: to show that focused proof-search is complete, the traditional sequent calculus for BI is reformulated using the simpler data-structure of nested sequents; then a polarised and focused variant of this calculus is introduced, which is shown to be sound and complete via a cut-elimination argument.



1st June (Online via MS Teams)

Nicolai Vorobjov (University of Bath)

Introduction to Pfaffian Functions

In recent years there has been considerable and growing interest from number theory and model theory in the Pila-Wilkie counting theorem and its applications to diophantine geometry. Several applications of the counting theorem involve the geometry of elliptic curves and abelian varieties, the most famous example perhaps being the proof of the Manin-Mumford conjecture by Pila and Zannier. In these applications, one considers sets defined using elliptic and abelian functions. These functions are restricted sub-Pfaffian (by an observation of Macintyre), i.e., their graphs are defined by Pfaffian functions, introduced by Khovanskii in the 1970s. In this talk I will introduce the basics of Pfaffian functions, including their fundamental finiteness properties that are crucial for the aforementioned applications.

The exposition will be quite elementary with no prior knowledge in this domain required.



25th May (Online via MS Teams)

Paul-André Melliès (CNRS, University of Paris)

A gentle introduction to template games

Game semantics is the art of interpreting formulas (or types) as games and proofs (or programs) as strategies. In order to reflect the interactive behaviour of programs, strategies are required to follow specific scheduling policies. Typically, in the case of a sequential programming language, the program (Player) and its environment (Opponent) play one after the other, in a strictly alternating way. On the other hand, in the case of a concurrent language, Player and Opponent are allowed to play several moves in a row, in a non-alternating way. In both cases, the scheduling policy is designed very carefully in order to ensure that the strategies synchronise properly and compose well when plugged together. A longstanding conceptual problem has been to understand when and why a given scheduling policy works and is compositional in that sense. In this talk, I will introduce the notion of template game and exhibit a number of simple and fundamental combinatorial properties which ensure that a given scheduling policy defines (indeed) a monoidal closed bicategory of games, strategies and simulations. The notion of template game will be illustrated by constructing two game models of linear logic with different flavours (alternating and asynchronous) using the same categorical combinatorics, performed in the category of small 2-categories.



11th May (Online via MS Teams)

Paolo Pistone (University of Bologna)

On Counting Quantifiers and Randomized Lambda-Calculi

The interactions between computability theory and logic are plentiful and deep: one can mention Gödel's 1931 arithmetization of recursive functions, the Curry-Howard correspondence between typed lambda-calculi and proof systems, or Cook's characterization of NP through the satisfiability problem for propositional logic. However, there is one side of the theory of computation which was only marginally touched by this fruitful interaction, that is, randomized computation. While probabilistic models of computation have been widely investigated and are nowadays pervasive in many areas of computer science, similar correspondences with logic for such calculi are still missing. 

In this talk I will discuss some ongoing ideas, developed together with Melissa Antonelli and Ugo Dal Lago, to enrich the language of propositional logic and arithmetic with "counting" or "measure" quantifiers. These quantifiers allow one to express that a formula is true in a certain portion of all possible interpretations of the quantified variable (be it a propositional variable or an arithmetical predicate variable). After sketching how the resulting logical systems can be used to capture properties of probabilistic programs and to characterize counting complexity classes, I will discuss a type system with counting quantifiers for a probabilistic lambda-calculus recently studied by Dal Lago, Guerrieri and Heijltjes, and I will show how this system can be used to define a sort of Curry-Howard correspondence for probabilistic computation.



4th May (Online via MS Teams)

Alessio Santamaria (University of Pisa)

Combining Semilattices and Semimodules

Monads play a fundamental role in computer science as they embody different notions of computation. For example, the Powerset monad P captures non-determinism, while the free semimodule monad S, for a given semiring, allows us to model quantitative aspects of computation, by varying the semiring. In order to model systems involving non-determinism and quantitative features, such as non-deterministic weighted automata, a compositional approach calls for a means of composing the monads P and S. Under appropriate hypotheses on the semiring generating the monad S, there exists a unique weak distributive law SP -> PS, which I will describe, and which has a very elegant form when the semiring is a positive semifield, involving the notion of convexity in semimodules. This weak distributive law induces a "composite monad" (whose underlying functor is not PS!) that, when appropriately restricted, is presented by the algebraic theory of simultaneous semilattices and semimodules where addition and scalar multiplication distribute over finite suprema.

These results are joint work with Filippo Bonchi and were recently presented at FoSSaCS; they can be found at https://doi.org/10.1007/978-3-030-71995-1_6.





27th April (Online via MS Teams)

Thomas Seiller (CNRS)

Linear realisability, dynamical systems and lower bounds in complexity

I will provide an overview of my recent (and not so recent) work studying connections between dynamical systems, logic and computation, with an emphasis on two recent results. The first exhibits a connection between linear negation and zeta functions of dynamical systems, something that can be exploited to build a model of second-order linear logic based on (discrete and continuous) Markov processes. The second unveils a connection between computational complexity and topological entropy, allowing us to give an abstract account of several previous lower bound results and strengthening one of the strongest complexity lower bounds previously known.



20th April (Online via MS Teams)

Gianluca Curzi (University of Birmingham)

Combining cyclic proofs and implicit complexity

Cyclic proofs (i.e. non-wellfounded but regular proofs) have received growing interest in the literature, and have been proposed as an alternative setting for studying (co)inductive reasoning. However, little is known about the complexity-theoretical aspects of cyclic proofs, which very often exhibit quite convoluted loop structures. This talk is an attempt to bridge the gap between implicit complexity and cyclic proofs. We study a cyclic proof system based on Hofmann’s SLR (Safe Linear Recursion), and we look for suitable proof-theoretical constraints able to reduce the complexity of loops and to capture some interesting complexity classes, such as ELEMENTARY and PTIME. This is joint work with Anupam Das.


13th April (online via MS Teams)

Samson Abramsky (University of Oxford)

Relating Structure to Power: from categorical semantics to descriptive complexity

There is a remarkable divide in the field of logic in Computer Science, between two distinct strands: one focussing on semantics and compositionality (“Structure”), the other on expressiveness and complexity (“Power”). It is remarkable because these two fundamental aspects are studied using almost disjoint technical languages and methods, by almost disjoint research communities. We believe that bridging this divide is a major issue in Computer Science, and may hold the key to fundamental advances in the field. In this talk, we describe a novel approach to relating categorical semantics, which exemplifies the first strand, to finite model theory, which exemplifies the second.




6th April (online via MS Teams)

Christophe Lucas (University of Lyon)

Proof theory of modal Riesz spaces

We design a hypersequent calculus proof system for the theory of modal Riesz spaces and prove the key theorems: soundness, completeness and cut-elimination. These are then used to obtain completely syntactic proofs of some interesting results concerning the theory. Most notably, we prove a novel result: the theory of modal Riesz spaces is decidable. This work has applications in the field of logics of probabilistic programs, since modal Riesz spaces provide the algebraic semantics of the Riesz modal logic underlying the probabilistic mu-calculus.




30th March (online via MS Teams)

Paulo Oliva (Queen Mary University)

The Selection Monad Transformer and Backward Induction with Mixed Strategies

The selection monad [1], T(X) = (X->R)->X, and its corresponding "sequence" operation [T(X)] -> T([X]), where [X] denotes lists of type X, have been shown to generalise backward induction [2], an algorithm for calculating sub-game perfect equilibria in sequential games. As it turns out, for any other monad M, T^M(X) = (X->MR)->MX is also a monad. We have used this fact in [3] to give a "Herbrand" functional interpretation for the axiom of countable choice. In work in progress we show how this monad transformer can be used to generalise backward induction even further, so that we are able to calculate optimal strategies even in the presence of non-deterministic or probabilistic adversaries. In this talk I'll try to go over both the basic definitions and results, and the most recent progress in this area.
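
To make the types concrete, here is a minimal Haskell sketch of the selection monad and its sequence operation (an editorial illustration of the standard definitions, not code from the talk; the 3-move game and its payoff, counting True moves, are made-up examples):

    import Control.Monad (ap, liftM)
    import Data.List (maximumBy)
    import Data.Ord (comparing)

    -- T(X) = (X -> R) -> X
    newtype Sel r x = Sel { runSel :: (x -> r) -> x }

    instance Functor (Sel r) where fmap = liftM
    instance Applicative (Sel r) where pure x = Sel (const x); (<*>) = ap
    instance Monad (Sel r) where
      Sel s >>= f = Sel (\k -> let g x = runSel (f x) k
                               in g (s (k . g)))

    -- A selection function picking an argument that maximises the outcome.
    argmax :: Ord r => [x] -> Sel r x
    argmax xs = Sel (\k -> maximumBy (comparing k) xs)

    -- The "sequence" operation [T(X)] -> T([X]) is the monadic sequence.
    -- Run on argmax players with payoff "number of True moves", it computes
    -- an optimal play by backward induction: [True,True,True].
    main :: IO ()
    main = print (runSel (sequence (replicate 3 (argmax [False, True])))
                         (length . filter id))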


23rd March (online via MS Teams)

Alessio Guglielmi (University of Bath)

Towards Substitution via Linearity and Projection
 
Joint work with Chris Barrett and Victoria Barrett

For many years I have been looking for sound principles that would allow us to design a deep-inference formalism with higher-order quantification and substitution.  Since substitution subsumes quantification, I focus on the former. The main design hurdles are non-linearity and negation, especially when their combined action creates connected cycles inside proofs. However, both non-linearity and negation are necessary features of all interesting logics, so what can we do? I discuss two recent developments that followed from the study of subatomic logic.

The first result is a way to design linear, negation-free subatomic proof systems that have the same behaviour as non-linear ones with negation. We recover standard proofs from subatomic ones via interpretation maps that, crucially, preserve the proof-theoretic structure and its normalisation properties. Here I use the word 'linear' in a fanatical way: there are no units. The second result shows the viability of a notion of projection that is strongly associated with second-order substitution. I will show how to achieve a complete proof-theoretic treatment of a proof system by only resorting to projections. Those two results give me a starting point to deal with substitutions, and therefore quantification, in a specific way, which I will illustrate.


16th March (online via MS Teams)

Aleks Kissinger (University of Oxford)

MLL+MIX as a logic of influence and causation

The framework of higher-order causal categories gives a means of reasoning about influences between the inputs and outputs of networks of 'black box' processes using multiplicative linear logic with the MIX rule (MLL+MIX). An intriguing feature of this framework is that it enables us to put not just processes, but also how the processes are wired together, i.e. their 'causal structure', in a black box. Hence, we can reason about situations where the causal structure is not fully known in advance, or is in fact genuinely indefinite (as in e.g. hypothesised 'quantum superpositions' of causal orderings). However, some care must be taken to compose processes consistently to avoid time-loops and ensuing logical paradoxes. From the construction of higher-order causal categories, it follows immediately that this logic of consistent composition includes MLL and the MIX rule; however, the full extent of the logic is not yet known. In this talk, I will discuss the construction of higher-order causal categories, as well as some potential avenues toward a complete linear-logical characterisation of the causal consistency conditions.


2nd March (online via MS Teams)

Anupam Das (University of Birmingham)

From linear inferences to graph rewriting and back: 10 years later

In this talk I will give an update on something that I spent a lot of time on as a PhD student in Bath: the theory of linear inferences, such as switch and medial. From the point of view of deep inference these rules are 'invisible' to normalisation procedures based on atomic flows, so extensions by linear inferences inherit a well-behaved proof theory (*). They also form precisely the multiplicative fragment of Blass' game semantics of linear logic (equivalently Japaridze's computability logic). 

The set of all linear inferences forms a rich structure that is coNP-complete so, a priori, embeds all of propositional logic. On the other hand, a result of mine and Lutz [Das & Strassburger '16] showed that they admit no feasible linear axiomatisation (unless coNP=NP), morally showing the necessity of structural proof theory. This also resolved an open problem of Blass and Japaridze and implies that there is no linear extension of SKS that admits removal of contraction loops as a flow rewrite rule (unless coNP=NP). 

Arguably due to this result, and a previous result on non-characterisability of {switch,medial} [Das '13], there has been a renewed focus on graph-theoretic presentations of proofs alternative to atomic flows. Settings such as combinatorial proofs/flows elegantly marry feasible sequentialisation with compositionality via normalisation, but unfortunately at the cost of sacrificing (*), which I would argue is one of the defining advantages of deep inference proof theory. More relevant to this line of work, there have recently been two proposals, [Acclavio, Horne & Strassburger '20] and [Calk, Das & Waring '20], for building logic/proof theory over a language of graphs rather than formulas. The former strictly generalises the latter from P4-free graphs (as relation webs) to arbitrary graphs, and this presentation has played a crucial role in several of the results in the area, not least the result of [Das & Strassburger '16].

In this talk (after surveying the above three paragraphs) I will (try to find time to) speak at a high level about the approaches of [Acclavio, Horne & Strassburger '20], based in linear logic, and [Calk, Das & Waring '20], based in classical logic, and give an idea of the subtleties behind them. In particular, we know that the two logics are incomparable (unlike the formula setting) and, as of now, we do not really know why. I will conclude the talk with some of the unresolved questions that I find interesting.


23rd February (online via MS Teams)

Ugo Dal Lago (University of Bologna)

Intersection Types and (Positive) Almost-Sure Termination

Randomized higher-order computation can be seen as being captured by a lambda-calculus endowed with a single algebraic operation, namely a construct for binary probabilistic choice. What matters about such computations is the *probability* of obtaining any given result, rather than the *possibility* or *necessity* of obtaining it, like in (non)deterministic computation. Termination, arguably the simplest kind of reachability problem, can be spelled out in at least two ways, depending on whether it talks about the probability of convergence or about the expected evaluation time, the second one providing a stronger guarantee. In this paper, we show that intersection types are capable of precisely characterizing both notions of termination inside a single system of types: the probability of convergence of any λ-term can be underapproximated by its type, while the underlying derivation’s weight gives a lower bound to the term’s expected number of steps to normal form. Noticeably, both approximations are tight—not only soundness but also completeness holds. The crucial ingredient is non-idempotency, without which it would be impossible to reason on the expected number of reduction steps which are necessary to completely evaluate any term. Besides, the kind of approximation we obtain is proved to be optimal recursion theoretically: no recursively enumerable formal system can do better than that.


16th February (online via MS Teams)

Jim Laird (University of Bath)

A compositional cost model for the lambda-calculus

We describe a (time) cost model for the (call-by-value) lambda-calculus based on a natural (“big-step”) presentation of its game semantics: the cost of computing a finite approximant to the denotation of a term (its evaluation tree) is the size of its smallest derivation in the semantics. 

This measure has an optimality property enabling compositional reasoning about cost bounds: for any term A, context C[ ] and approximants a and b to the trees of A and C[A], the cost of computing b from C[A] is no more than the cost of computing a from A and b from C[a].

Although the natural semantics on which it is based is nondeterministic, our cost model is reasonable: we describe a deterministic algorithm for recognizing evaluation tree approximants which satisfies it (up to a constant factor overhead) on a Random Access Machine. This requires an implementation of the call-by-value lambda-calculus on the RAM which is completely lazy: compositionality of costs entails that work done to evaluate any part of a term cannot be duplicated. This is achieved by a novel implementation of graph reduction for nameless explicit substitutions, to which we compile the lambda-calculus via a series of linear cost reductions.


9th February (online via MS Teams)

Noam Zeilberger (Ecole Polytechnique)

Skew monoidal categories and the proof-theoretic anatomy of associativity

Based on joint work with Tarmo Uustalu and Niccolò Veltri.

The talk will survey a recent line of work, which takes a proof-theoretic approach to solving the coherence problem(s) for skew monoidal categories and related structures.  I will begin by discussing the so-called Tamari order on fully-bracketed words induced by a semi-associative law (AB)C <= A(BC), and explain how a simple sequent calculus may account for some of its fascinating properties, such as the fact that the set of fully-bracketed words on n+1 letters forms a lattice Y_n under this order, as well as a remarkable formula counting the number of intervals in Y_n. Then I will recall the definition of skew monoidal categories, and explain how a more refined sequent calculus may be used to solve two related coherence problems: deciding equality of maps and enumerating homsets in free skew monoidal categories.  Finally, I will briefly discuss variations of the sequent calculus capturing "partially skew" monoidal categories with different normality conditions.
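
As a small illustration (an editorial toy sketch, not code from the papers below), the semi-associative law can be implemented as a rewrite on fully-bracketed words, i.e. binary trees, with t <= u in the Tamari order iff u is reachable from t by rotations:

    -- Fully-bracketed words on a fixed sequence of letters.
    data Tree = L | Tree :*: Tree deriving (Eq, Show)

    -- All one-step rotations (A*B)*C ~> A*(B*C), at the root or deeper.
    rotations :: Tree -> [Tree]
    rotations L = []
    rotations (x :*: y) =
         [ a :*: (b :*: y) | a :*: b <- [x] ]  -- rotate at the root
      ++ [ x' :*: y | x' <- rotations x ]      -- rotate inside the left part
      ++ [ x :*: y' | y' <- rotations y ]      -- rotate inside the right part

    -- Tamari order: reflexive-transitive closure of one-step rotation.
    -- This terminates because each rotation strictly decreases left-nesting.
    leq :: Tree -> Tree -> Bool
    leq t u = t == u || any (`leq` u) (rotations t)

    main :: IO ()
    main = print (leq ((L :*: L) :*: L) (L :*: (L :*: L)))  -- True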

References:
[1] https://arxiv.org/abs/1803.10080
[2] https://arxiv.org/abs/2003.05213
[3] https://arxiv.org/abs/2101.10487


26th January (online via MS Teams)

Marianna Girlando (Inria Saclay, France)

Automated tools for intuitionistic modal logics

In this talk I will present MOIN, a simple Prolog theorem prover implementing two nested proof systems for intuitionistic modal logics: single-conclusion and multi-conclusion nested sequents.
The single-conclusion systems have an internal cut-elimination, while the multi-conclusion systems can provide a countermodel in case of proof search failure.

After presenting the logics in the intuitionistic modal cube, I will introduce the proof systems and explain how MOIN implements the termination strategies and the countermodel construction of the calculi for the logics whose decidability is known - that is, all systems in the cube except IK4, ID4 and IS4.   

This is joint work with Lutz Strassburger, accepted at AiML 2020.
The prover is available here: http://www.lix.polytechnique.fr/Labo/Lutz.Strassburger/Software/Moin/MoinProver.html


12th January (online via MS Teams)

Jorge Perez (University of Groningen)

Linearity, Control Effects, and Behavioral Types

Mainstream programming idioms intensively rely on state mutation, sharing, and concurrency. Designing type systems for handling and disciplining such idioms is challenging, due to long-known conflicts between internal non-determinism, linearity, and control effects (such as exceptions).

In the first part of the talk, I will present a type system that accommodates non-deterministic and abortable behaviors in the setting of concurrent programs with session types. Our type system builds on a Curry-Howard correspondence with (classical) linear logic conservatively extended with two dual modalities; it provides a first example of a Curry-Howard interpretation of a realistic concurrent programming language with built-in internal non-determinism.

To illustrate the expressivity of this type system, in the second part of the talk I will discuss how to represent a non-deterministic lambda-calculus with resources and intersection types into our session-typed concurrent setting with non-deterministic and abortable behaviors.

Note: the first part of the talk is based on joint work with Luis Caires (presented at ESOP'17); the second part is based on work with Joseph Paulus and Daniele Nantes.


2020


8th December (online via MS Teams)

Alex Kavvos (University of Bristol)

Client-Server Sessions in Linear Logic

Abstract: We introduce coexponentials, a new set of modalities for Classical Linear Logic. As duals to exponentials, the coexponentials codify a distributed form of the structural rules of weakening and contraction. This makes them a suitable logical device for encapsulating the pattern of a server receiving requests from an arbitrary number of clients on a single channel. This avoids the introduction of rules like MIX. Guided by this intuition we formulate a system of session types based on Classical Linear Logic with coexponentials, which is well-suited to modelling client-server interactions.


1st December (online via MS Teams)

Delia Kesner (Paris Diderot University)

The Spirit of Node Replication

Abstract: We define and investigate a term calculus implementing higher-order node replication. We use the new calculus to specify two different (weak) evaluation strategies: call-by-name and fully lazy call-by-need. We show that they are observationally equivalent using type-theoretical tools.


23rd November (online via MS Teams)

Philip Saville (University of Edinburgh)

Type theory for cartesian closed bicategories [joint work with Marcelo Fiore]

Cartesian closed bicategories appear in logic, categorical algebra, and semantics. Roughly speaking, a cartesian closed bicategory is a version of a cartesian closed category in which the usual beta-eta laws are replaced by coherent isomorphisms.

In this talk I will introduce an internal language for cartesian closed bicategories, and show how the syntax can be extracted in a principled fashion from universal-algebraic considerations. The resulting language consists of types, terms, and rewrites between terms, together with an explicit substitution operation. By examining the language more closely, we shall see that it really is, in a precise sense, the simply-typed lambda calculus 'up to isomorphism', and that its rewrites describe beta-eta reductions. No knowledge of bicategories will be assumed.


10th November (online via MS Teams)

Pawel Sobocinski (Tallinn University of Technology)

Games on graphs: a compositional approach

Matrices, and more generally linear algebra, play an important role in graph theory. Superficially, adjacency matrices can encode connectivity in a graph, but this encoding is often used as a bridge, inviting the transfer of concepts and techniques from linear algebra. Categorically speaking, the symmetric monoidal category of matrices has an elegant presentation as the prop of bimonoids.

With Chantawibul, we explored the addition of "caps" and the corresponding natural equations, which turn out to capture the concept of adjacency matrix. Adding generators for vertices, we obtain a symmetric monoidal category of "open" graphs. In recent work with Di Lavore and Hedges, we extended this to capture multigraphs, and used the resulting category as a "syntax" with which one can define a natural class of games as monoidal functors from open graphs to open games. In my talk, I will concentrate on explaining the basic components of the category of open graphs, and sketch the results of the two papers below.

A. Chantawibul and P. Sobociński, "Towards Compositional Graph Theory", In MFPS XXXI, 2015
E. Di Lavore, J. Hedges and P. Sobocinski, "Compositional modelling of network games", Accepted for CSL 2021. Earlier version available on the arXiv (https://arxiv.org/pdf/2006.03493.pdf)


3rd November (online via MS Teams)

Wen Kokke (University of Edinburgh)
Towards Races in Classical Linear Logic

Process calculi based in logic, such as πDILL and CP, provide a foundation for deadlock-free concurrent programming, but exclude non-determinism and races. HCP is a reformulation of CP which addresses a fundamental shortcoming: the fundamental operator for parallel composition from the π-calculus does not correspond to any rule of linear logic, and therefore not to any term construct in CP. We introduce HCPND, which extends HCP with a novel account of non-determinism. Our approach draws on bounded linear logic to provide a strongly-typed account of standard process calculus expressions of non-determinism. We show that our extension is expressive enough to capture many uses of non-determinism in untyped calculi, such as non-deterministic choice, while preserving HCP's meta-theoretic properties, including deadlock freedom.



27th October (online via MS Teams)

Mario Alvarez-Picallo (University of Oxford)
The Difference Lambda-Calculus

Cartesian difference categories are a recent generalisation of Cartesian differential categories which introduce a notion of “infinitesimal” arrows satisfying an analogue of the Kock-Lawvere axiom, with the axioms of a Cartesian differential category being satisfied only “up to an infinitesimal perturbation”. We present here a simply-typed calculus in the spirit of the differential λ-calculus equipped with syntactic “infinitesimals”, which provides us with a syntactic account of difference categories as well as, surprisingly, a minimalistic model for forward-mode Automatic Differentiation.



6th October (online via MS Teams)

Willem Heijltjes (University of Bath)
Decomposing lambda-calculi with effects

When computational effects are introduced into the lambda-calculus, typically, confluence is lost. Its semantics then becomes dependent on a chosen reduction strategy, and since different reductions yield different effectful behaviour, it becomes necessary to provide means of controlling evaluation within the syntax. The effect is to introduce a strong operational component, to the detriment of one of the main strengths of the (typed) lambda-calculus, which is that it is naturally amenable to denotational reasoning.

In this talk I will propose a new model of effectful computation: the sequential poly-lambda-calculus. Rather than extending the lambda-calculus with primitives, it generalizes the existing constructors so that computational effects as well as the control of evaluation can be decomposed into it. The calculus is confluent, and can be simply typed.

The result is a radically simple model of effectful computation. The calculus consists of four constructors and a single reduction rule (the generalization of beta-reduction), yet it describes mutable higher-order store, input and output, random variables, and sequencing of commands. It encodes both call-by-name and call-by-value, as well as previous approaches to doing so, such as Moggi's Computational Metalanguage, Levy's call-by-push-value, and Ehrhard and Guerrieri's Bang Calculus. The typed version constitutes a new approach to the problem of typing higher-order store.



28th July (online via MS Teams)

Francesco Gavazzo (Università di Bologna, Italy)
An Operational Analysis of Algebraic Effects

In this talk, we study the operational theory of a lambda-calculus with algebraic effects in the spirit of Plotkin and Power. The theory is built over a notion of effectful monadic reduction which extends the standard notion of beta-reduction to monadic elements.

We show that for any algebraic effect the notion of reduction thus obtained enjoys a form of finitary standardization, and that if effects are commutative, then we also obtain confluence. In the presence of arbitrary (algebraic) effects, however, both finitary and infinitary reductions exhibit interesting computational content. For this reason, we study asymptotic properties of effectful computations and show that even if the reduction is quite liberal, the asymptotic behavior of a term has good computational properties.

This is joint work with Claudia Faggian.



23rd June (online via MS Teams)

Francesco Genco (Université Paris 1, France)
A Parallel Computational Interpretation of Multiplicative Linear Logic

We present a computational interpretation of multiplicative linear logic in terms of parallel functional programming. By employing ideas from the deduction system on which λμ is based, we define the multiple-conclusion natural deduction system NMLL for multiplicative linear logic and we show that it admits a computational interpretation in terms of communicating parallel processes. The resulting calculus, λ-par, is a simple extension of λ-calculus with parallelism and communication primitives. After presenting λ-par, we show some programs which exemplify its expressive power and we discuss its properties, such as progress, strong normalisation and confluence.

This is joint work with Federico Aschieri, accepted at POPL 2020.

slides



16th June (online via MS Teams)

Gabriele Vanoni (Università di Bologna, Italy)
The Abstract Machinery of Interaction

Geometry of Interaction (GOI) is a semantic framework describing the dynamics of cut-elimination in the Multiplicative Exponential fragment of Linear Logic (MELL). It has been formulated in many ways, from operator algebras to traced symmetric monoidal categories. Remarkably, some of these formulations, in particular Context Semantics by Gonthier et al., led to the introduction of compilation techniques (Mackie '95) and abstract machines for higher-order functional languages (Danos and Regnier '99). However, these machines were always based on a proof-net representation of programs, via the so-called Girard translation. We introduce an abstract machine in the spirit of GOI based directly on the λ-calculus, without any explicit use of proof-nets. We prove soundness and adequacy, and we derive in our framework an optimization already presented by Danos and Regnier. We complete our study with a complexity analysis of those machines.

This is joint work with Beniamino Accattoli and Ugo Dal Lago.

slides



9th June (online via MS Teams)

Sonia Marin (University College London)
A Characterisation of Forwarders as Linear Logic Proofs

Forwarders are simple processes, within distributed systems of connected peers, that re-send messages. We give a logical account of forwarding, following a standard proofs-as-processes approach. The main contributions of this paper are two proof systems that characterise (i) non-order-preserving asynchronous and (ii) synchronous forwarders, respectively. We relate synchronous forwarders to coherence logic, which captures multiparty compatibility in multiparty session types, and we show that both systems satisfy deadlock-freedom and compositionality, a property so far not known to hold for forwarders or other coherence systems.

This is work-in-progress with Marco Carbone and Carsten Schürmann.


26th May (online via MS Teams)

Ross Horne (University of Luxembourg)
Logic Beyond Formulae: Generalising the Calculus of Structures for Logic on Graphs

Every formula of propositional logic can be written as a graph with nodes labelled with atomic propositions, where edges joining two nodes indicate the operator joining the sub-formulae in which the respective atoms appear. Those edges can model dependencies in logical models of processes. However, restricting to graphs that formulas generate (co-graphs, series-parallel graphs, etc.) is an arbitrary restriction from the perspective of modelling processes, since ubiquitous structures such as producer-consumer queues form structures that do not correspond to formulae in this way.

The additional expressive power offered by graphs motivates us in this fundamental study, where we design a proof system whose inference rules are applied directly to graphs rather than formulae. In designing the proof system, we assume only that the logic has an involutive negation (which can only be defined by graph complement), and implication is defined as "not P, or Q", where "or" is the disjoint union of graphs. The inference rules of the proof system then follow from fundamental principles of logic, most of which are established via cut elimination, and an assumption that we aim for the minimal and hence most general system conforming to such principles. Note that applying the same principles to a logic based on formulae yields multiplicative linear logic (MLL); and indeed we obtain a conservative extension of MLL to graphs. Beyond discovering a new system, the insight for logic is that (1) deep inference is necessary even for this most basic generalisation of logic to graphs; (2) we were forced to generalise the tools of deep inference itself, deepening techniques such as splitting, context reduction, and rule elimination. Thereby, we discover a proper generalisation of the calculus of structures, which itself already generalises the sequent calculus, which is fundamental to structural proof theory.

This is joint work with Matteo Acclavio and Lutz Strassburger, to appear in LICS'20.



19th May (online via MS Teams)

Victoria Barrett (University of Bath)
Towards a normalisation methodology and a topological representation of proofs

In this talk I will discuss the progress made so far in developing a normalisation methodology and a representation of proofs. My project will build on Gundersen's atomic flows and Aler Tubella’s subatomic logic. Subatomic logic treats atoms as connectives and obtains a linear rule shape for a wide variety of proof systems. By generalising the rule shape further we will be able to capture modality and quantification in the open deduction formalism and to factor out associativity to obtain good proof semantics. A good representation will provide induction measures for the normalisation of proofs, and provide proof invariants.

I have developed a preliminary proof representation, which tracks the connectives in a subatomic derivation. I will use this to show how a generalisation of the cut-contraction rewrite rule from atomic flows can be applied more widely than initially expected, and I will show how the binary subatomic rule shape could be generalised. In the sense that subatomic logic gives us a universal rule shape which can be applied to a wide range of logics, we hope to find universal rewrite rules which will unify the normalisation of a range of logics.



12th May (online via MS Teams)

Thomas Powell (University of Bath)
Quantitative Tauberian theorems

In this talk I seek to achieve two things. First, I aim to give a very concise introduction to modern applied proof theory, without assuming any prior knowledge of this area. Second, I will present some new research of mine in Tauberian theory, including a proof-theoretic study of the celebrated Hardy-Littlewood Tauberian theorem.

slides



5th May (online via MS Teams)

Davide Barbarossa (Université Sorbonne Paris Nord, France and Università Roma Tre, Italy)
Approximating functional programs: Taylor subsumes Scott, Berry, Kahn and Plotkin.

In their fundamental paper developing the differential λ-calculus in 2003, T. Ehrhard and L. Régnier write: "Understanding the relation between the term and its full Taylor expansion might be the starting point of a renewing of the theory of approximations".

This speculative ambition is nowadays at hand: using the resource-sensitive theory that the differential λ-calculus comes with, we provide "simpler" proofs of crucial results in the λ-calculus that are otherwise obtained via Scott's continuity, Berry's stability or Kahn and Plotkin's sequentiality theory.

This is joint work with Giulio Manzonetto, accepted at POPL 2020.

slides



28th April (online via MS Teams)

Gabriel Scherer (INRIA Saclay, France)
Focusing on lambda-calculus equivalence

In this overview talk we will show how the logical technique of focusing can be applied to better understand program equivalence in the simply-typed lambda-calculus with datatypes (in particular sums and the empty type).

The talk will not assume familiarity with focusing, and will introduce it in the context of propositional intuitionistic logic. We will then present a focused presentation of the (normal forms of the) lambda-calculus. If time allows, we will then introduce the "saturated" normal forms, a stronger restriction that we used in a POPL 2017 paper (available at https://arxiv.org/abs/1610.01213) to decide equivalence of simply-typed lambda-terms with sums and the empty type.

slides



21st April (at 14:00, online via MS Teams)

Damiano Mazza (CNRS-Université Paris 13, France)
Automatic Differentiation in PCF

Automatic differentiation (AD) is the science of efficiently computing the derivative (or gradient, or Jacobian) of functions specified by computer programs. It is a fundamental tool in several fields, most notably machine learning, where it is the key to training deep neural networks. Although AD techniques traditionally focus on a restricted class of programs, namely first-order straight-line programs, the rise of so-called differentiable programming in recent years has created the need to apply AD to complex code, containing all sorts of control flow operators and higher-order combinators. In this talk, I will discuss the extension of AD algorithms to PCF, a(n idealized) purely functional programming language. We will first consider the simply-typed lambda-calculus, showing in particular how linear negation is related to reverse mode AD (aka backpropagation), and then see how the extra features of PCF, namely full recursion and conditionals, may be dealt with, stressing the difficulties posed by the latter.
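As background for readers new to AD, here is a minimal Haskell sketch of the simplest technique, forward mode via dual numbers; the names are illustrative and this is not the construction of the talk, which centres on reverse mode and its connection to linear negation.

  -- Forward-mode AD: pair each value with its derivative (a dual number).
  data Dual = Dual { primal :: Double, tangent :: Double } deriving Show

  instance Num Dual where
    Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
    Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
    Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)  -- product rule
    fromInteger n         = Dual (fromInteger n) 0          -- constants: derivative 0
    abs (Dual x dx)       = Dual (abs x) (dx * signum x)
    signum (Dual x _)     = Dual (signum x) 0

  -- Differentiate any program written generically over Num.
  diff :: (Dual -> Dual) -> Double -> Double
  diff f x = tangent (f (Dual x 1))  -- seed the tangent with 1

For instance, diff (\x -> x * x + 3 * x) 2 evaluates to 7.0.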

Joint work with Aloïs Brunel (Deepomatic) and Michele Pagani (IRIF, Université de Paris), accepted at POPL 2020.

slides



14th April (online via MS Teams)

Pierre Clairambault (CNRS-ENS Lyon, France)
Fix Your Semantic Cube Using This Simple Trick

Game semantics usually presents execution as a total chronological ordering of observable computational events. It has proved to be a powerful tool in semantics, yielding a great many full abstraction results. Its first few achievements were striking, capturing in a single framework combinations of state and control. This led to Abramsky's famous "semantic cube" programme: to capture a wide variety of effects in a single canvas. However, past the original "semantic square" resulting from the combinations of control and state, developments in game semantics have since been rather disconnected.

The family of game semantics usually known as "concurrent games", originally introduced by Abramsky and Melliès in 1999, has been actively developed in the past decade or so. In contrast with the total chronological ordering of traditional games, concurrent games present (implicitly or explicitly) the causal constraints between computational events. To that end they borrow tools from concurrency theory, and in particular rely on structures coming from "true concurrency". They are therefore well suited to the denotational semantics of concurrent programming languages.

In this talk, I will give an introduction to concurrent games and present some developments that highlight a different property: they are good at linking distinct models with each other, in and outside of game semantics. Concretely, I will present the first steps to fixing the cube {state, parallelism, probabilities} -- all its faces are currently broken. First, I will fix the face {state, parallelism}. For that I will introduce a notion of parallel innocence, exploiting the causal aspect of concurrent games. Time permitting, I will also fix {state, probabilities}, via probabilistic concurrent games and their link with weighted relational semantics. I will close on some perspectives on the full cube.

This talk builds on joint work with several people, among whom Simon Castellan, Hugo Paquet and Glynn Winskel.

slides



31st March (online via MS Teams)

Willem Heijltjes (University of Bath)
Decomposing Probabilistic Lambda-Calculi

A notion of probabilistic lambda-calculus usually comes with a prescribed reduction strategy, typically call-by-name or call-by-value, as the calculus is non-confluent and these strategies yield different results. This is a break with one of the main advantages of lambda-calculus: confluence, which means results are independent from the choice of strategy.
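To see why the strategies disagree, here is a crude Haskell illustration of my own (using the list monad for unweighted choice; this is not the calculus of the talk): duplicating before or after sampling gives different outcome sets.

  -- Unweighted probabilistic choice, crudely modelled by the list monad.
  coin :: [Int]
  coin = [0, 1]

  -- Call-by-value: sample the coin once, then duplicate the result.
  cbv :: [Int]
  cbv = do x <- coin
           return (x + x)      -- [0,2]

  -- Call-by-name: duplicate the coin, sampling anew at each use.
  cbn :: [Int]
  cbn = do x <- coin
           y <- coin
           return (x + y)      -- [0,1,1,2]

The generator/consumer decomposition described next separates exactly these two moments: the generator fixes when an event is sampled, the consumer fixes where its outcome is used.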

We present a probabilistic lambda-calculus where the probabilistic operator is decomposed into two syntactic constructs: a generator, which represents a probabilistic event; and a consumer, which acts on the term depending on a given event. The resulting calculus, the Probabilistic Event Lambda-Calculus, is confluent, and interprets the call-by-name and call-by-value strategies through different interpretations of the probabilistic operator into our generator and consumer constructs.

We present two notions of reduction, one via fine-grained local rewrite steps, and one by generation and consumption of probabilistic events. Simple types for the calculus are essentially standard, and they convey strong normalization. We demonstrate how we can encode call-by-name and call-by-value probabilistic evaluation.

This is joint work with Ugo dal Lago and Giulio Guerrieri, accepted at FoSSaCS 2020.



18th February

Anupam Das (University of Birmingham)
Some recent developments in non-wellfounded proof theory

Non-wellfounded proof theory has become a popular alternative to systems with induction or with infinitely branching rules. The non-wellfounded approach to infinitary proof theory retains finitary branching at the cost of wellfoundedness, naturally requiring some global correctness criterion to prevent fallacious reasoning. Both wellfounded and non-wellfounded infinitary proof theory have the advantage of being amenable to metalogical arguments for proof analysis (e.g. cut-elimination), but the non-wellfounded approach exhibits a further advantage: there is a natural class of finite proofs, namely those whose dependency graphs are finite, known as 'circular' or 'cyclic' proofs.

Non-wellfounded proofs were considered early on in the history of logic, due to their association with non-standard models of arithmetic (indeed, this was one of the motivations behind Mints’ continuous cut-elimination). Cyclic proofs themselves seem to first appear in the '90s and '00s in the study of modal fixed point logics, where regular tableaux were obtained in a general way thanks to the celebrated Niwinski-Walukiewicz games. In the '00s Brotherston and Simpson, inspired by that approach, constructed non-wellfounded systems of predicate logic extended by forms of inductive definitions. More recently the French school of proof theory has pioneered the study of cyclic type systems, motivated by the Curry-Howard correspondence and admitting a more structural proof-theoretic treatment.

This talk aims to give an overarching (and personal) view of the world of circular proof theory today. In particular, I will try to promote a classical viewpoint of bridging the somewhat disconnected worlds therein by means of correspondences that, in a formal sense, preserve computational content. To this end I will concentrate, as a case study, on the bridge between first-order arithmetic and simply typed recursive programs induced by Goedel’s functional interpretation.



28th January (room 4E 2.4)

James Laird (University of Bath)
A Curry-style Semantics of Interaction: From untyped to second-order lazy lambda-mu calculus

We propose a "Curry-style" semantics of programs in which a nominal labelled transition system of types, characterizing observable behaviour, is overlaid on a nominal LTS of untyped computation. This leads to a notion of program equivalence as typed bisimulation.

Our semantics reflects the role of types as hiding operators, firstly via an axiomatic characterization of "parallel composition with hiding" which yields a general technique for establishing congruence results for typed bisimulation, and secondly via an example which captures the hiding of implementations in abstract data types: a typed bisimulation for the lazy lambda-mu calculus with (Curry-style) polymorphic typing. This is built on an abstract machine for CPS evaluation of lambda-mu terms: we first give a basic typing system for this LTS which characterizes acyclicity of the environment and local control flow, and then refine this to a polymorphic typing system which uses equational constraints on instantiated type variables, inferred from observable interaction, to capture behaviour at polymorphic and abstract types.




2019


11th December (Wednesday, at 13:15 in room 3E 2.4)

Andrea Aler Tubella (Umeå University, Sweden)
Deep Inference for Machine Learning explainability?

In this informal talk, I will introduce the concept of explainability for Artificial Intelligence and the needs of the field with respect to proof theory. I will discuss some ideas on how deep inference research could help in the development of languages to encode explanations through the design of systems with good complexity/analiticity/proof search properties that support connectives that can express concepts like sequentiality, probabilities or causality in a modular way. Thoughts and discussion about possible research directions are highly encouraged.

slides



19th November

Lutz Strassburger (Inria, France)
Focused and Synthetic Nested Sequents
(joint work with Kaustuv Chaudhuri and Sonia Marin)

Focusing is a general technique for transforming a sequent proof system into one with a syntactic separation of non-deterministic choices without sacrificing completeness. This not only improves proof search, but also has the representational benefit of distilling sequent proofs into synthetic normal forms. We show how to apply the focusing technique to nested sequent calculi, a generalization of ordinary sequent calculi to tree-like instead of list-like structures. We thus improve the reach of focusing to the most commonly studied modal logics, the logics of the modal S5 cube. Among our key contributions is a focused cut-elimination theorem for focused nested sequents.


Benjamin Ralph (Inria, France)
Decomposing Proofs with Merge Contractions

Recent work by Alessio Guglielmi and Andrea Aler Tubella has provided a unified theory of normalisation for a wide variety of logics, using a subatomic variant of the deep inference proof methodology.  This involves distinguishing "cut-like" rules and "contraction-like" rules.  The normalisation procedure for the former is not dissimilar to the notion of cut elimination that is foundational to proof theory. However normalising the latter, by decomposing the proof into a contractive and non-contractive phase, is less familiar, as, for many logics, the normal forms obtained are not expressible in the sequent calculus.

In this talk, I will focus on decomposition for classical propositional logic. In the absence of a certain type of cycle in the proof, this is straightforward: one can simply "push" the contractions through the proof until they are distilled from the other inference rules. Yet, if these cycles are present, one must either remove the cycles first, or take a different approach. In a previous paper, we explored the first option. I will describe work taking the other approach, defining a new generalised form of contraction, the merge contraction, and developing its rewriting theory to avert the problem of cycles. Despite it originating as the solution to a particular problem, I will also argue that merge contractions are an interesting object of study in their own right, with a natural coinductive definition and many interesting properties.

slides


14th November (Thursday, 13:45-15:05, room 6E 2.2)

Nobuko Yoshida and Simon Castellan (Imperial College London, UK)
Two Sides of the Same Coin: Session Types and Game Semantics

Game semantics and session types are two formalisations of the same concept: message-passing open programs following certain protocols. Game semantics represents protocols as games, and programs as strategies; while session types specify protocols, and well-typed π-calculus processes model programs. Giving faithful models of the π-calculus and giving a precise description of strategies as a programming language are two difficult problems. In this talk, we show how these two problems can be tackled at the same time by building an accurate game semantics model of the session π-calculus.


5th November

Federico Olimpieri (Aix-Marseille Université, France)
Intersection types as generalized species: a bicategorical model for pure lambda-calculus

The bicategory of generalised species is a generalisation of both the category of sets and relations and Joyal's species of structures. It can be described as the Kleisli category of a suitable pseudo-comonad on distributors. In a recent paper, Tsukada, Asada and Ong introduced a denotational semantics via generalised species with a type-theoretic flavour. Their model is a refinement of the classic Gardner-de Carvalho commutative non-idempotent intersection type system. We further generalise their work in two ways. Firstly, we present a model for the pure lambda-calculus, for the algebraic lambda-calculus and for the bang-calculus. Secondly, we show how it is possible to present models that give rise to more informative intersection type systems (e.g. systems that admit a form of subtyping).

slides


29th October

Matteo Acclavio (Università Roma Tre, Italy)
Combinatorial proofs for modal logic 
(joint work with Lutz Strassburger)

Combinatorial proofs are a sound and complete proof system for classical logic which captures the notion of proof equivalence. A combinatorial proof consists of a morphism from a graph equipped with a perfect matching to a graph representing a formula. In this talk I will present the graphical syntax of combinatorial proofs and show how the formalism can be extended to obtain sound and complete proof systems for the modal logics of the S4 plane.

This work appeared at Tableaux 2019 (link to the paper).

slides


Roberto Maieli (Università Roma Tre, Italy)
The structure of Multiplicatives, 30 years later (joint work with Matteo Acclavio)

We investigate the notion of generalized connectives for multiplicative linear logic. To define such new connectives, we introduce a notion of orthogonality for partitions of a finite set and we study the relation between orthogonality and dual connectives. For such connectives, we give a notion of decomposability by means of conjunction and disjunction. Finally, we show the existence of a family of non-decomposable connectives which nevertheless have a simple interpretation as a superposition of decomposable connectives.

This work will appear at CSL 2020.

slides


8th October

Giulio Guerrieri (University of Bath)
Factorization and Normalization, Essentially
(joint work with Beniamino Accattoli and Claudia Faggian)

Lambda-calculi come with no fixed evaluation strategy. Different strategies may then be considered, and it is important that they satisfy some abstract rewriting property, such as factorization or normalization theorems. In this paper we provide simple proof techniques for these theorems. Our starting point is a revisitation of Takahashi's technique to prove factorization for head reduction. Our technique is both simpler and more powerful, as it works in cases where Takahashi's does not. We then pair factorization with two other abstract properties, defining essential systems, and show that normalization follows. Concretely, we apply the technique to four case studies, two classic ones, head and the leftmost-outermost reductions, and two less classic ones, non-deterministic weak call-by-value and least-level reductions.

This work will appear at APLAS 2019.

slides


30th July (room 1W 2.03)

Ugo Dal Lago (Università di Bologna, Italy)
Differential Logical Relations 
(joint work with Francesco Gavazzo and Akira Yoshimizu)

We introduce a new form of logical relation which, in the spirit of metric relations, assigns each pair of programs a quantity measuring their distance, rather than a boolean value standing for their being equivalent (or not). The novelty of differential logical relations consists in measuring the distance between terms not (necessarily) by a numerical value, but by a mathematical object which somehow reflects the interactive complexity, i.e. the type, of the compared terms. We exemplify this concept in the simply-typed lambda-calculus, and show a form of soundness theorem. We also see how ordinary logical relations and metric relations can be seen as instances of differential logical relations. Finally, we show that differential logical relations can be organised in a cartesian closed category, in contrast to metric relations, which are well known not to have such a structure, but only that of a monoidal closed category. We conclude the talk by hinting at other forms of differential program semantics, which are being studied as part of the DIAPASoN research project.

This work appeared at ICALP 2019 (link to the paper).



-------------------------------------------------------------------------------

Informal Workshop Micro Structures and Deduction (Micro-SD)

 
-------------- Program --------------

Wednesday 29 May (Room 3W 3.7)

  1:15  Adam Lassiter (Bath): Coalescence Proof Search for Classical Logic
  2:00  Michelangelo Mecozzi (Bath): Expansion Tree Proofs with Unification
  2:45  --- Coffee break ---
  3:15  Chris Barrett (Bath): A Subatomic Proof System for Decision Trees
  4:15  --- End ---

Monday 3 June (Room 8W 1.33)

  1:15  Delia Kesner (Paris 7): Principles of Quantitative Types for Higher-Order Languages
  2:15  David Sherratt (Jena): The Spinal Atomic Lambda-Calculus
  2:45  --- Coffee break ---
  3:15  Giulio Guerrieri (Bath): Standardization via Linear Logic for Call-by-Name, Call-by-Value and Choice Effects
  4:00  Benjamin Ralph (INRIA Paris): Towards a Combinatorial Proof Theory 
  4:45  --- End ---

--------------- Abstracts ---------------
 

Adam Lassiter (University of Bath) 

Coalescence Proof Search for Classical Logic

Coalescence is a highly efficient proof search method for additive linear logic, the logic of categorical sum and product. For a proof of a sequent over two formulas A and B, it is a linear algorithm on the 2-dimensional product space |A|x|B|, where |A| and |B| are the sets of subformulas of A and B. By design, the algorithm has no need for backtracking, and has built-in sharing (or memoization).

In this talk, I discuss how to adapt this method for classical logic. The starting point is the observation that a classical formula A can be proved by n-dimensional coalescence - that is, there is some n such that A can be proved by coalescence over the space |A|x|A|x...x|A| (with n copies of |A|).

Whether this algorithm is efficient depends on the growth of n. In a naive approach, this growth is far worse than expected. I will then discuss some natural ways of optimizing the algorithm.

-----

Michelangelo Mecozzi (University of Bath)

Expansion Tree Proofs with Unification

Expansion tree proofs are a representation of proofs in first-order classical logic that is focused on the witnessing terms to existential quantifiers, factoring out both propositional proof content (which is decidable) and the bureaucracy of sequent calculus. They are based on Herbrand's theorem, and have a natural game-semantic interpretation. There is also an interesting connection with learning algorithms.

Herbrand's theorem, however, makes no mention of explicit witnesses. In line with this, Hughes, Heijltjes, and Strassburger have recently given proof formalisms for fragments of first-order linear logic that omit witnessing terms, and instead reconstruct these by unification.

In this talk, I will apply this idea to expansion tree proofs, yielding a variant that omits witnessing terms, and discuss algorithms to reconstruct these.

-----

Chris Barrett (University of Bath)

A Subatomic Proof System for Decision Trees

Recently, several proof systems have been developed in a very principled and regular way according to a methodology called 'subatomic'. The main idea is to consider atoms as non-commutative self-dual connectives representing a superposition of truth values. This way, non-linear inference rules, such as contraction and cut, become linear and obey a unique scheme. Normalisation is, therefore, at the same time, simplified and generalised. The contribution of this talk is that, surprisingly, by completing in the most natural way the standard subatomic proof system for propositional classical logic, we obtain a new proof system for a conservative extension of propositional classical logic and of binary decision trees. Cut elimination for this system becomes an almost trivial variation of the already known (and also trivial) 'experiment method', which is essentially a truth tabling operation that generates canonical analytic proofs.

-----

Delia Kesner (Université de Paris, IRIF, CNRS)

Principles of Quantitative Types for Higher-Order Languages

Quantitative techniques are emerging in different areas of computer science (model checking, logic, automata theory, etc) to face the challenge of today's resource aware computation.

In this talk we discuss quantitative types, a.k.a. non-idempotent (intersection/union) types, which provide a natural tool to reason about resource consumption. Quantitative types are applicable to a wide range of powerful models of computation, including pure functional programming, pattern matching, control operators and infinitary computations.
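As a flavour of how non-idempotent types count resources, here is the standard application rule of de Carvalho-style multi types (my paraphrase for illustration, not taken from the talk), in which the argument receives one type per use and environments are summed as multisets:

  \frac{\Gamma \vdash t : [\sigma_1,\ldots,\sigma_n] \to \tau
        \qquad \Delta_i \vdash u : \sigma_i \quad (1 \le i \le n)}
       {\Gamma + \Delta_1 + \cdots + \Delta_n \vdash t\,u : \tau}

Because the multiset [\sigma_1,\ldots,\sigma_n] is not idempotent, the size of a derivation grows with the number of uses of each argument, which is what makes quantitative bounds on evaluation possible.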

We provide a clean theoretical understanding of the use of resources, and survey some recent semantical and operational results in the domain.

------

David Sherratt (University of Jena)

The Spinal Atomic Lambda-Calculus

We introduce a variant of Gundersen, Heijltjes, and Parigot's atomic lambda-calculus. This typeable calculus with explicit sharing extends the Curry-Howard interpretation of open deduction already established by the previous calculus. This expanded interpretation allows us to follow a duplication strategy stronger than full laziness, in the sense that we duplicate less and share more of a term. This new ability is strongly connected to interpreting the scopes of abstractions, which in turn is closely related to the open deduction system. We establish the good operational behaviour of the language by means of some fundamental properties such as confluence and preservation of strong normalisation. We finally discuss the possible benefits of this calculus, which may lead to some remarkable breakthroughs.

------

Giulio Guerrieri (University of Bath)

Standardization via Linear Logic for Call-by-Name, Call-by-Value and Choice Effects

Linear Logic (LL) has proved powerful in providing insights and tools to analyze computational and denotational properties of programming languages. When considering the lambda-calculus, LL provides a common setting where both call-by-name (CbN) and call-by-value (CbV) evaluation mechanisms can be interpreted via Girard's translations. The bang calculus internalizes the insights coming from LL proof-nets into an untyped lambda-like syntax which subsumes both CbN and CbV. In a different sense, the idea of a system which subsumes both CbN and CbV had already been advocated and developed by Levy, via the Call-by-Push-Value paradigm; and indeed, the bang calculus can be seen as an untyped version of Levy's one. This approach has been mainly exploited semantically. Moreover, the Call-by-Push-Value paradigm, as well as related polarized calculi, admits LL-based models featuring non-trivial computational effects such as non-deterministic or probabilistic computations.

We show that the bang calculus provides a powerful methodology to study syntactical properties of CbN and CbV, and their extension to calculi with a choice effect (such as probabilistic, non-deterministic, or quantum). In particular, we have a uniform method to study the fundamental syntactical results for CbN and CbV lambda-calculi: confluence, standardization and normalization, and their extension to calculi with effects. Our approach allows us not only to revisit some well-known results, but also to discover new ones, namely in studying calculi with effects.

-----

Benjamin Ralph (Inria Paris Saclay)

Towards a Combinatorial Proof Theory

Combinatorial proofs were introduced by Hughes to give a ``syntax-free'' presentation of proofs in classical propositional logic. In a nutshell, a classical combinatorial proof consists of two parts: (i) a linear part, and (ii) a skew fibration. The linear part encodes a proof in multiplicative linear logic (MLL), whose conclusion is represented as a cograph, while the skew fibration maps this linear cograph to the cograph of the conclusion of the whole proof. In deep inference, this skew fibration corresponds exactly to a contraction-weakening derivation. Applying certain restrictions to these two rules leads to substructural logics and substructural proof theory. These proof-theoretic restrictions correspond precisely to set-theoretic and graph-theoretic restrictions on the skew fibration, allowing us to characterise proof systems by their graph homomorphism class.

------------------------------


21st May (room 1W 3.107)

Cristina Matache (University of Oxford)
A Sound and Complete Logic for Algebraic Effects
(joint work with Sam Staton)

I will talk about definitions of program equivalence, for a language with algebraic effects in the style of Plotkin and Power. Program equivalence is a long-standing problem in computer science, made more difficult by the presence of higher-order functions and algebraic effects. In this talk I will present a logic whose formulas represent properties of effectful programs. The satisfaction relation of the logic induces a notion of program equivalence. Notably, the induced equivalence coincides with contextual equivalence (which equates programs with the same observable behaviour) and also with an applicative bisimilarity.

This is joint work with Sam Staton which appeared at FOSSACS 2019. (link to the article)

pdf


16th April

Alessio Santamaria (University of Bath)
Towards a Godement Calculus for Dinatural Transformations

Dinatural transformations, which have been used to interpret the second-order lambda calculus [1] as well as derivations in intuitionistic logic [2] and multiplicative linear logic [3], fail to compose in general; this has been known since they were discovered by Dubuc and Street in 1970 [4]. Many ad hoc solutions to this remarkable shortcoming have been found, but a general theory of compositionality was missing. The results Guy and I obtained during my PhD and that I’ll show in the talk are the following:

  1. Acyclicity of certain graphs associated to dinatural transformations is a sufficient and essentially necessary condition for two dinatural transformations to compose;
  2. This yields the definition of a category whose objects are mixed-variant functors and whose morphisms are partially dinatural transformations;
  3. A new definition of horizontal composition for dinatural transformations embodying substitution of functors into transformations and vice-versa, intuitively reflected from the string-diagram point of view by substitution of graphs into graphs.

This work represents the first, fundamental steps towards a Godement-like calculus of dinatural transformations as sought—but never achieved—by Kelly in [5], with the intention then to apply it to describe coherence problems abstractly [6]. We have not managed to get such a complete calculus yet, but we are “well on our way” on the path paved by Kelly himself in [5] when he was assessing the simpler case of a many-variable fully covariant functorial calculus. What we have not done yet will be the subject of future work.

pdf

[1] Bainbridge, E.S., Freyd, P.J., Scedrov, A. and Scott, P.J., 1990. Functorial polymorphism. Theoretical Computer Science [Online], 70(1), pp.35–64. Available from: https://doi.org/10.1016/0304-3975(90)90151-7.

[2] Girard, J.-Y., Scedrov, A. and Scott, P.J., 1992. Normal Forms and Cut-Free Proofs as Natural Transformations. In: N.M. Yiannis, ed. Logic from Computer Science, [Online]. Mathematical Sciences Research Institute Publications. Springer, New York, NY, pp.217–241. Available from: https://doi.org/10.1007/978-1-4612-2822-6_8.

[3] Blute, R., 1993. Linear logic, coherence and dinaturality. Theoretical Computer Science [Online], 115(1), pp.3–41. Available from: https://doi.org/10.1016/0304-3975(93)90053-V.

[4] Dubuc, E. and Street, R., 1970. Dinatural transformations. In: S. Mac Lane, H. Applegate, M. Barr, B. Day, E. Dubuc, A.P. Phreilambud, R. Street, M. Tierney and S. Swierczkowski, eds. Reports of the Midwest Category Seminar IV, [Online]. Lecture Notes in Mathematics. Springer, Berlin, Heidelberg, pp.126–137. Available from: https://doi.org/10.1007/BFb0060443.

[5] Kelly, G.M., 1972. Many-variable functorial calculus. I. In: G.M. Kelly, M. Laplaza, G. Lewis and S. Mac Lane, eds. Coherence in Categories, [Online]. Lecture Notes in Mathematics. Springer, Berlin, Heidelberg, pp.66–105. Available from: https://doi.org/10.1007/BFb0059556.

[6] Kelly, G.M., 1972. An abstract approach to coherence. In: G.M. Kelly, M. Laplaza, G. Lewis and S. Mac Lane, eds. Coherence in Categories, [Online]. Lecture Notes in Mathematics. Springer, Berlin, Heidelberg, pp.106–147. Available from: https://doi.org/10.1007/BFb0059557.


19th March

Nicolas Wu (University of Bristol)
The Scope of Algebraic Effect Handlers

Algebraic effect handlers are a methodology for dealing with effectful programs in a modular way. They build on the free monad for a functor as a means of describing a syntax tree. An interpretation of syntax is provided by a handler, which folds over the structure with an algebra. Handlers themselves are not always algebraic operations, which can lead to surprising properties of programs written in this style. This talk presents scoped algebraic effect handlers, which allow a certain class of handlers to be expressed algebraically.
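To make the first half of this description concrete, here is a minimal Haskell sketch of the standard, unscoped setup (the scoped handlers that are the talk's contribution go beyond it; all names here are illustrative):

  {-# LANGUAGE DeriveFunctor #-}
  -- Syntax trees of effectful programs: the free monad over a signature functor f.
  data Free f a = Pure a | Op (f (Free f a)) deriving Functor

  instance Functor f => Applicative (Free f) where
    pure         = Pure
    Pure g <*> x = fmap g x
    Op m   <*> x = Op (fmap (<*> x) m)

  instance Functor f => Monad (Free f) where
    Pure a >>= k = k a
    Op m   >>= k = Op (fmap (>>= k) m)

  -- A handler interprets syntax by folding over it with an algebra.
  handle :: Functor f => (a -> b) -> (f b -> b) -> Free f a -> b
  handle gen _   (Pure a) = gen a
  handle gen alg (Op m)   = alg (fmap (handle gen alg) m)

  -- Example signature: binary nondeterministic choice...
  data Choice k = Choice k k deriving Functor

  -- ...and a handler collecting all possible results.
  runAll :: Free Choice a -> [a]
  runAll = handle (: []) (\(Choice l r) -> l ++ r)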


12th March

Luc Pellissier (Université Paris Diderot)
Fibrations, approximations and intersection type systems
(joint work with Damiano Mazza and Pierre Vial)

We present calculi (intuitionistic or classical) in a bioperadic setting, and, following Melliès and Zeilberger (2015), represent type systems as morphisms of such structures.

Programs in such calculi can be approximated arbitrarily well by linear programs, which use their argument exactly once. In this work, we show that such linear approximations give rise, through an adapted Grothendieck construction, to intersection type systems for any logical system that can be meaningfully embedded into linear logic.

We recover in this way all well-known intersection type systems, capturing different notions of normalization for various languages, and give a blueprint to compute new ones.

pdf


26th February

Willem Heijltjes (University of Bath)
Intuitionistic Proofs without Syntax
(joint work with Dominic Hughes and Lutz Straßburger)

Hughes's Combinatorial Proofs are an intriguing, purely graphical notion of proof for propositional classical logic. Here, we answer the question: what is their intuitionistic counterpart? Our answer, Intuitionistic Combinatorial Proofs (ICPs), is a concrete geometric semantics of intuitionistic proof that reveals many interesting connections: to game semantics, deep inference, supercombinator reduction, and more.

The technical development starts with a graphical representation of formulas, in the style of arenas in game semantics, and co-graphs and coherence spaces in classical and linear logic. Unlike game semantics, where arenas are most often tree-like, our arenas are DAG-like.

ICPs, like their classical counterparts, are then composed of two fragments: a linear fragment called an "arena net", and a contraction-weakening fragment called a "skew fibration". An arena net is a proof net in intuitionistic multiplicative linear logic (IMLL), but over an arena rather than a formula. A skew fibration is a particular graph homomorphism between the arena net and the arena for the conclusion formula of the proof. 

In contrast with linear logic, where linear/non-linear factorization is at the formula level through the bang operator, ICPs factor at the level of proofs, like Herbrand's Theorem for first-order classical logic, and like proof stratification in deep inference. An ICP can be viewed as a Hyland-Ong strategy with additional sharing.

ICPs abstract fully and faithfully over all non-duplicating permutations of the sequent calculus, a result we call "Polynomial Full Completeness" that is analogous to Hughes and Heijltjes's recent "Local Canonicity" result for MALL. By contrast, lambda-calculus and game semantics incur an exponential blowup. This manifests in ICPs as a natural form of sharing, induced by the DAG-form of arenas.

We give a simple and direct normalization relation that arises naturally from the semantics. It relates to closed reduction in lambda-calculus (a redex may only reduce if it contains no free variables) and supercombinator reduction (as used in the implementation of Haskell), techniques originally devised for reasons of efficiency. Intriguingly, the sharing in ICPs need not be unfolded during normalization, and instead the shared subterm is evaluated first, similarly to call-by-need reduction.

pdf


12th February

David Sherratt (University of Bath)
Exploring the limits of full laziness

For my PhD I introduce a variant of Gundersen, Heijltjes, and Parigot's atomic lambda calculus. This typeable calculus with explicit sharing extends the Curry-Howard interpretation of open deduction already established by the previous calculus. This expanded interpretation allows us to follow a duplication strategy stronger than full laziness, in the sense that we duplicate less and share more of a term. We call this optimal laziness, and explain the different levels of laziness and how they relate to implementations, including optimal graph reduction in the style of Lamping.

pdf


15th January

Andrea Aler Tubella (University Paris Diderot)
Decomposing proof systems with contraction

We present a general normalisation theory for logics with contraction. We show that in a wide variety of proof systems fulfilling some general conditions we can separate every proof into two phases: a linear phase followed by a phase made up only of contractions.
By combining these results together with splitting, a generalised cut-elimination theory for linear proof systems, we are therefore presenting a modular normalisation theory encompassing many existing logics.


2018


20th November

Giulio Guerrieri (University of Bologna)
Types of Fireballs 

The good properties of Plotkin's call-by-value lambda-calculus crucially rely on the restriction to weak evaluation and closed terms. Open call-by-value is the more general setting where evaluation is weak but terms may be open. Such an extension is delicate and the literature contains a number of proposals. Recently, we provided operational and implementative studies of these proposals, showing that they are equivalent with respect to termination, and also at the level of time cost models. This paper explores the denotational semantics of open call-by-value, adapting de Carvalho's analysis of call-by-name via multi types (aka non-idempotent intersection types). Our type system characterises normalisation and thus provides an adequate relational semantics. Moreover, type derivations carry quantitative information about the cost of evaluation: their size bounds the number of evaluation steps and the size of the normal form, and we also characterise derivations giving exact bounds. The study crucially relies on a new, refined presentation of the fireball calculus, the simplest proposal for open call-by-value, that is more apt to denotational investigations.

pdf


30th October

Georg Moser (University of Innsbruck)
Epsilon Calculus and Herbrand Complexity (the case with equality)

Hilbert's epsilon calculus is based on an extension of the language of predicate logic by a term-forming operator εₓ. Two fundamental results about the epsilon calculus, the first and second epsilon theorem, play a rôle similar to that which cut-elimination or decomposition techniques play in other logical formalisms. In particular, Herbrand's Theorem is a consequence of the epsilon theorems. In the talk I will present the epsilon theorems and present new results (for the case with equality) on the complexity of the elimination procedure underlying their proof, as well as the length of Herbrand disjunctions of existential theorems obtained by this elimination procedure.


13 June

James Davenport (University of Bath)
Category Theory of Computer Algebra

Computer algebra systems started off manipulating polynomials over the integers, or groups of permutations. As they moved into greater ranges of data structures, the need for underpinning theory became felt, and Scratchpad II (later Axiom) was the first system to use Universal Algebra and Category Theory for this. These days, other systems such as Sage and Magma also do this. This talk explains how Axiom's category structure works, and some of the challenges posed by constructivity.


22 May

Fabio Zanasi (University College London)
Algebraic methods for network diagrams and component-based systems

Diagrammatic languages are used in diverse fields of science to specify and study systems based on interacting components. Graphics outperforms textual information in highlighting connectivity and resource exchange between parts of a system. This makes diagrammatic languages particularly effective in the analysis of subtle interactions such as those found in cyber-physical, concurrent and quantum systems.

In recent years "algebraic network theory" has emerged in theoretical computer science as a unifying mathematical framework in which diagrammatic languages are given a completely formal status and are studied using the compositional methods of algebraic program semantics. Nowadays the algebraic approach finds application in fields as diverse as quantum theory, linguistics, Bayesian probability, concurrency theory and the analysis of signal processing, electrical and digital circuits.

In this talk I give a general overview of the approach, focussing on signal processing theory as a case study. Towards the end I will touch on some current research threads, including structural operational semantics for string diagrams, diagram rewriting implementation, and diagrammatic reasoning in Bayesian inference.


15 May

Joe Paulus (University of Bath)
Deep Inference Intersection Types

In this talk we investigate typing the basic calculus, a linear calculus with explicit sharing. We do this in the open deduction formalism, which is a combination of the sequent calculus and natural deduction. In the process we made a key innovation: multiset derivations in open deduction.


8 May

James Laird (University of Bath)
Genericity Meets Dinaturality: The Semantics of Bounded Quantification

Bounded quantification (exemplified by the typing system Fsub) combines parametric polymorphism with subtyping, increasing its capacity to capture genericity and modularity of code by representing phenomena such as inheritance.

We describe a denotational semantics for bounded quantification, specifically:
- A general construction of a model of Fsub from a model of System F (the second-order lambda-calculus) which has certain dinaturality properties. (A "converse" to the observation of Cardelli et al. that instantiation of bounded quantifiers should be dinatural in subtype coercions.)
- A category of polymorphic games with the required dinaturality properties.
- A game semantics based on the above, for a programming language in which programs are stateful objects with bounded polymorphic types. In this model, full abstraction holds for "concretely bounded" types, but fails in general, arguably pointing to a gap in the expressiveness of existing typing systems for bounded polymorphism.


20 March

Zak Tonks (University of Bath)
Fast Matrix Inversion in Computer Algebra

James R. Bunch and John E. Hopcroft improved upon Strassen’s (1969) method for matrix inversion via fast matrix multiplication in 1974. Bunch–Hopcroft handled the case in which principal submatrices are singular, and presented a modification for providing LU factorisation via this scheme. Such fast matrix multiplication techniques based on Strassen’s method recurse via 7 multiplications on submatrices instead of the naive 8, and such methods achieve a worst case complexity of O(n^ω) where ω = log2 7.
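For context, the exponent ω = log2 7 arises from a textbook divide-and-conquer recurrence (added here for illustration; it is not specific to this talk): seven recursive multiplications on half-size blocks plus quadratic-time block additions give

  T(n) = 7\,T(n/2) + O(n^2) \quad\Longrightarrow\quad T(n) = O\bigl(n^{\log_2 7}\bigr) \approx O(n^{2.81}).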

However, Bunch–Hopcroft’s method assumes that the input matrix is over a field — in particular the recursive nature of the algorithm requires that certain elements and sub-determinants are invertible. But this is not always true of a ring, and in Computer Algebra we are most interested in rings of polynomials. In this presentation a fraction-free formulation of the algorithm is presented that is most suitable for (dense) matrices of sparse polynomials, the intention being that such a method should be more efficient than interpolation methods in this case. In this way, the aim is to provide for these matrix inversion methods what Bareiss–Dodgson did for Gaussian elimination.


9 January

Vladimir Dotsenko (Trinity College Dublin)
Pre-Lie Algebras and F-manifolds

The geometric notion of an F-manifold was introduced by Hertling and Manin as a natural generalisation of the celebrated notion of a Frobenius manifold. Pointwise, the structure of an F-manifold leads to an algebraic structure on tangent spaces that is a weakened version of a Poisson algebra: a commutative associative product and a Lie bracket that are compatible in a way slightly weaker than the Leibniz identity for Poisson algebras. Poisson algebras can be thought of as "infinitesimal associative algebras". The slogan of this talk is that "F-manifold algebras are infinitesimal pre-Lie algebras", relating them to a remarkable algebraic structure discovered independently by Gerstenhaber, Koszul and Vinberg in the 1960s. I shall outline the proof, which relies on a mixture of the theory of rewriting systems and homotopical algebra.
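For reference, the defining identity of a (right) pre-Lie algebra, a textbook definition rather than anything specific to the talk, states that the associator of the product ◁ is symmetric in its last two arguments:

  (x \triangleleft y) \triangleleft z - x \triangleleft (y \triangleleft z) = (x \triangleleft z) \triangleleft y - x \triangleleft (z \triangleleft y).

This weakening of associativity is exactly enough for the commutator x ◁ y - y ◁ x to be a Lie bracket.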


2017


28 November

Nobuko Yoshida (Imperial College London)
Linear Logic and Session Types

In this talk, we first outline recent activities in our mobility group in the Department of Computing, Imperial College London. Then we talk about the following work on Linear Logic and Session Types. Linear logic has long been heralded as a potential model for concurrency: from Girard's original paper, to Abramsky's computational interpretation, reiterated by Bellin and Scott. More recently, an interpretation for intuitionistic linear logic has been given by Caires and Pfenning where propositions are viewed as session types - a well-established typing discipline for concurrency - proofs as processes and proof reduction as inter-process communication.

In this talk we will detail how several generalisations and extensions of this interpretation arguably form a basis for a logical foundation that captures several interesting features of message-passing concurrent computation. Specifically, we will detail how the basic interpretation can be extended to richer typed settings such as polymorphism and dependent type theories, and how to account for a meaningful notion of typed process equivalence that gives meaning to both proof conversions and type isomorphisms.


21 November

Hugo Paquet (University of Cambridge)
Probabilistic Concurrent Game Semantics

Game semantics has been very successful at modelling the language PCF and various extensions. In particular, there is a fully abstract games model (Danos-Harmer 2002) for Probabilistic Algol, an extension of PCF with probability and ground references. But it proved difficult to study Probabilistic PCF itself, without references, in this context. 


I will show how concurrent games, based on event structures, provide a new framework for probabilistic game semantics. In this setting we can express "probabilistic innocence", the property characterising Probabilistic PCF programs. We will see that concurrent games also allow us to interpret a concurrent version of Probabilistic PCF, in which some programs can be evaluated in parallel. Finally, I will talk about a new direction in probabilistic game semantics, aiming at supporting languages with continuous probability distributions.


7 November

Pawel Sobocinski (University of Southampton)
Graphical Linear Algebra

Graphical linear algebra is a string diagrammatic alternative to the usual mathematical treatment of linear algebra (vector spaces, matrices and all that). Like process algebra, it is both a specification language and an implementation language, giving a new "operational" way to approach this classical and uniquely important mathematical subject. The language enjoys a sound and complete axiomatisation (called the theory of Interacting Hopf Algebras), making it an attractive alternative calculus in which one can perform elementary computations of linear algebra.

Finally, string diagrams, like classical syntax, can be equipped with an operational semantics, becoming an extension of Shannon's signal flow graphs and connecting process algebraic notions with classical control and systems theory.


31 October

Lutz Straßburger (INRIA Saclay - Île-de-France)
Combinatorial Flows

In this talk I will introduce combinatorial flows as a generalization of Hughes' combinatorial proofs. I will show how they can cover cut and substitution as methods of proof compression, and how cut and substitution are eliminated. Finally, I will show how proofs in syntactic formalisms like deep inference or sequent calculus are translated into combinatorial flows and vice versa.



7 June

Pierre Clairambault (ENS de Lyon)
A compositional account of Herbrand's theorem via concurrent games

Herbrand's theorem, often regarded as a cornerstone of proof theory, exposes some of the constructive content of classical logic. In its simplest form, it reduces the validity of a first-order purely existential formula ∃x.φ(x) (with φ quantifier-free) to that of a finite disjunction φ(t₁) ∨ ... ∨ φ(tₙ). More generally, it gives a reduction of first-order validity to propositional validity, by understanding the structure of the assignment of first-order terms to existential quantifiers, and the causal dependency between quantifiers.
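A standard textbook example, not taken from the talk: the formula ∃x.(P(x) → P(f(x))) is valid but has no single witnessing term; two witnesses c and f(c) suffice, yielding the propositionally valid Herbrand disjunction

  \bigl(P(c) \to P(f(c))\bigr) \lor \bigl(P(f(c)) \to P(f(f(c)))\bigr).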

In this talk, I will show that Herbrand's theorem in its general form can be elegantly stated as a theorem in the framework of concurrent games. The causal structure of concurrent strategies, paired with annotations by first-order terms, is used to specify the dependency between quantifiers. Furthermore concurrent strategies can be composed, yielding a compositional proof of Herbrand's theorem, simply by interpreting classical sequent proofs in a well-chosen denotational model. I assume no prior knowledge of Herbrand's theorem or concurrent games.

This is joint work with Aurore Alcolei, Martin Hyland and Glynn Winskel.


6 June

John Gowers (University of Bath)
Sequoidal Categories and Transfinite Games: A Coalgebraic Approach to Stateful Objects in Game Semantics

The non-commutative sequoid operator ⊘ on games was introduced to capture algebraically the presence of state in history-sensitive strategies in game semantics, by imposing a causality relation on the tensor product of games. Coalgebras for the functor A ⊘ _ - i.e., morphisms from S to A ⊘ S - may be viewed as state transformers: if A ⊘ _ has a final coalgebra, !A, then the anamorphism of such a state transformer encapsulates its explicit state, so that it is shared only between successive invocations.
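In functional-programming notation, the final coalgebra and its anamorphism can be sketched as follows (a rough Haskell analogy of mine, with a stream-like functor standing in for A ⊘ _, not the games construction itself):

  {-# LANGUAGE DeriveFunctor #-}
  -- The final coalgebra (greatest fixpoint) of a functor f.
  newtype Nu f = Nu { out :: f (Nu f) }

  -- Anamorphism: unfold a coalgebra step :: s -> f s into Nu f.
  -- The seed s is the encapsulated state; it is invisible in the result type.
  ana :: Functor f => (s -> f s) -> s -> Nu f
  ana step = Nu . fmap (ana step) . step

  -- A crude stand-in for A ⊘ _ : emit an A, then continue.
  data Seq a k = Seq a k deriving Functor

  -- A state transformer Int -> Seq Int Int; ana hides its counter state.
  counter :: Nu (Seq Int)
  counter = ana (\n -> Seq n (n + 1)) 0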

When we use this construction to model stateful objects in game semantics, we are using two properties of the game !A: firstly, that it is the final coalgebra for the functor A ⊘ _ (so that we may use it to encapsulate state) and secondly, that it is a model for the exponential from linear logic (specifically, it is the cofree commutative comonoid over A).  I will investigate the underlying reasons why the same object carries both structures by using the notion of a sequoidal category, which generalizes and formalizes the sequoid operator from games, and discussing what extra conditions we need to place on it in order to guarantee that the final coalgebra for A ⊘ _  has the same carrier as the cofree commutative comonoid over A.  

I will give two different situations in which we can draw this conclusion. The first is based upon the Melliès-Tabareau-Tasson formula for the cofree commutative comonoid in the situation where it can be constructed as a sequential limit of symmetrized tensor powers. The second approach adds the extra axiom that a certain canonical morphism from !(A × B) to !A ⊗ !B is an isomorphism. This second condition always holds in the case that !A is a bifree algebra for A ⊘ _, but in general it is necessary to impose it, as we establish by giving an example of a sequoidally decomposable category of games in which plays are allowed to have transfinite length. In this category, the final coalgebra for the functor A ⊘ _ is not the cofree commutative comonoid over A: we illustrate this by explicitly contrasting the final sequence for the functor A ⊘ _ with the chain of symmetric tensor powers used in the construction by Melliès, Tabareau and Tasson.


23 May

John Power (University of Bath)
One-dimensional generalisations of the notion of category

 A few weeks ago, I gave a seminar on joint work with Richard Garner that was based upon categories enriched in bicategories. However, I only outlined a definition and only briefly mentioned the context in which they arise. So I plan to give an expository talk outlining a few of the one-dimensional generalisations of the notion of category that have been studied over the past sixty years, with an idea of how they arose and how they relate to each other.

 

16 May

Alessio Guglielmi (University of Bath)
Sabbatical Report


In the past year we have obtained several results in deep inference, helped by a productive sabbatical for me. In my opinion, it is time to make an effort and push deep inference into the mainstream.

In fact, I just found out that, contrary to what I have been thinking so far, deep inference has dramatic, positive consequences on the complexity of proofs above propositional logic. Another reason is that we now have a theory of normalisation that is more powerful and conceptually clearer than the traditional one.

I will give a high-level presentation of the main results of the past year, which are: 1) subatomic logic and a common structure behind normalisation procedures; 2) elimination of cycles and its impact on substitution theory; 3) a generalised understanding of quantification and its impact on the complexity of proofs.

There are opportunities for several projects that extend or apply those results, in particular on cyclic proofs, process algebras, the epsilon calculus, the Curry-Howard isomorphism (on which I recently changed my mind a bit), and more. It would be helpful for me to get feedback on what the best strategy for future grant applications could be.

I have been told by many experienced colleagues that what is needed now, in order to make deep inference popular, is a textbook. I am convinced that this is necessary and I believe that the best way is to found the new theory starting from the notion of atomic flow. I will tell you what I think the atomic flows can and should do in terms of foundations, but I need your help to approach them from the categorical point of view. (A categorical understanding of atomic flows is necessary because they basically are string diagrams, which are seemingly becoming the foundation for quantum theories, which in turn might benefit from the compression and computation properties of deep inference.)


2 May

John Power (University of Bath)

From finitary monads to Lawvere theories: Cauchy completions
(joint with Richard Garner)

The two main category-theoretic formulations of universal algebra are Lawvere theories and finitary (= filtered-colimit preserving) monads on Set. The usual way in which to construct a Lawvere theory from a finitary monad T is by considering the opposite of the restriction of the Kleisli category Kl(T) to finite sets or, equivalently, natural numbers. Richard Garner recently found a different formulation of this, using the notion of Cauchy completion of a finitary monad qua monoid, i.e., qua one-object V-category, in V, where V is the monoidal category [Set_f, Set], equivalently the monoidal category [Set, Set]_f of filtered-colimit preserving functors from Set to Set.
Both finitary monads (easily) and Lawvere theories (with more effort) extend from Set to arbitrary locally finitely presentable categories. So last year in Sydney, Richard and I explored the extension of his construction via Cauchy completions. That works most naturally if one does it in a unified way, i.e., not for one locally finitely presentable category at a time, but for all simultaneously, using the notion of W-category for a bicategory W.
I shall talk about as much of this as we can reasonably handle: it is work in progress, so I have not fully grasped it myself yet, and there is much to absorb, e.g., the concepts of Cauchy completion and categories enriched in bicategories. The emphasis will very likely be on Richard's work rather than the work we did jointly.

 

28 March

John Power (University of Bath)

Logic programming: laxness and saturation
(joint with Ekaterina Komendantskaya)

A propositional logic program P may be identified with a (Pf Pf)-coalgebra on the set of atomic propositions in the program. The corresponding C(Pf Pf)-coalgebra, where C(Pf Pf) is the cofree comonad on Pf Pf, describes derivations by resolution. That correspondence has been developed to model first-order programs in two ways, with lax semantics and saturated semantics, based on locally ordered categories and right Kan extensions respectively. We unify the two approaches, exhibiting them as complementary rather than competing, reflecting the theorem-proving and proof-search aspects of logic programming.
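As a minimal Haskell sketch of the propositional identification (my rendering, approximating finite powersets by lists; not the categorical machinery of the talk):

  -- A propositional logic program as a (Pf Pf)-coalgebra: each atom is sent
  -- to the finite set of bodies of its clauses, each body a finite set of atoms.
  type Atom    = String
  type Program = Atom -> [[Atom]]

  -- Example program:  p :- q, r.   p :- s.   q.   r.
  prog :: Program
  prog "p" = [["q", "r"], ["s"]]
  prog "q" = [[]]
  prog "r" = [[]]
  prog _   = []

  -- Naive derivation by resolution: an atom holds if some clause body
  -- consists entirely of derivable atoms (no loop detection here).
  provable :: Program -> Atom -> Bool
  provable pr a = any (all (provable pr)) (pr a)

Here provable prog "p" evaluates to True via the first clause.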

While maintaining that unity, we further refine lax semantics to give finitary models of logic programs with existential variables, and to develop a precise semantic relationship between variables in logic programming and worlds in local state.

 

7 March

Nicolai Vorobjov (University of Bath) 
Lindemann's Theorem and Schanuel's Conjecture

Schanuel's conjecture is the central, yet unsettled, proposition in transcendental number theory. Most of the known results (like the famous Lindemann–Weierstrass theorem) and other conjectures in this theory follow from Schanuel. The conjecture has turned out to be useful in model theory and analytic geometry; for example, Macintyre and Wilkie proved Tarski's conjecture on the decidability of the theory of the reals with exponentiation, assuming Schanuel. Recently I studied, with C. Riener, the structure of irreducible components of real exponential sets, using Schanuel. In this talk I will review the basics of transcendental numbers, and formulate the conjecture and some of its consequences.


28 February

John Power (University of Bath)
Clubs, enrichment and weak n categories: a progress report

In the early 1960's, Jean Benabou proposed the notion of bicategory: a bicategory bears the same relationship to a 2-category as a monoidal category bears to a strict monoidal category, a monoidal category being a one-object bicategory and a strict monoidal category being a one-object 2-category.

It is not difficult to generalise the notion of 2-category to one of n-category for arbitrary n, but generalising the notion of bicategory has been a focus of research in both category theory and algebraic topology since the late 1980's, and it remains so now. Here, I report on progress that Soichiro Fujii, Thomas Cottrell and I have made over the past few months, as we have gradually come to understand and, to some extent, reorganise the approach taken by Michael Batanin in Australia and Tom Leinster in Scotland. 

Our work is not yet complete, but we believe it explicates at least part of the underlying structure more clearly than hitherto done, by using an enriched category-theoretic perspective.


21 February

Marco Solieri (University of Bath)
The good, the bad and the ugly
Sharing, superposition and expansion in lambda terms and proof nets

Models of efficient implementation and fine semantics of higher-order programming languages both need to include a formal account of duplication. Once deconstructed, duplication can then be:

* avoided - as in the static normalisation of proof nets given by Girard's geometry of interaction (GoI);
* anticipated - as in the linearisation of lambda terms given by Taylor-Ehrhard-Regnier expansion;
* postponed - as in the optimal implementation of the lambda calculus given by Lamping's sharing graphs (SG).

How are the GoI and the Taylor expansion related? Is the optimal implementation efficient? In this talk I will survey two of the contributions I obtained in my doctoral investigations, and sketch some of the directions of my current or prospective interest.

I will first introduce the geometry of resource interaction, a GoI for the resource lambda calculus, the linear and non-deterministic variant of the ordinary lambda calculus that serves as the domain of Taylor expansion. The algebraic structure which implements computation on paths within a term/proof corresponds essentially to the multiplicative portion of the latter, enriched with a superposition operator. An expanded version of Girard's execution formula can then be easily formulated and shown to be invariant under reduction for ground-typed closed terms.

Secondly, I will recall that the only general (i.e. not empirical) result about the complexity of SG is restricted to two variants of linear logic proof nets, ELL and LLL, which characterise two time complexity classes, elementary and polynomial respectively. In these cases, Baillot and Dal Lago have shown, exploiting a GoI-like approach, that the complexity of SG remains within those classes. In the same setting, and together with Guerrini, I obtained a direct cost comparison between SG and proof net reduction by purely syntactic means. A simulation between the two reductions allows us to establish a stronger bound: a quadratic one.

 

7 February

Noam Zeilberger (University of Birmingham)
A Categorical Perspective on Type Refinement Systems

A "type refinement system" is a type system built on top of a typed programming language, as an extra layer of typing. Type refinement systems in this sense have become increasingly popular, as a lightweight mechanism for improving the correctness of programs.  In the talk, I will give an introduction to a categorical perspective on type refinement systems that I have been developing in collaboration with Paul-André Melliès, based on the simple idea of modelling a type refinement system as an "erasure" functor from a category of typing derivations to a category of terms.  Some questions one can consider from this perspective include:

* What does it mean for a program to have more than one type? What does it mean for a typing judgment to have more than one derivation?

* How should we understand the so-called "subsumption rule"?

* If functors are type refinement systems, what does it mean for a functor to be a Grothendieck (bi)fibration?

A class of type refinement systems that is especially natural from this perspective consists of those coming from a strict monoidal closed functor that is simultaneously a bifibration. I will give some examples illustrating how such type refinement systems can be used to give an axiomatic account of some phenomena from the semantics of separation logic and lambda calculus.
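For orientation, the basic setup of the Melliès-Zeilberger perspective can be summarised as follows (my paraphrase of their definitions, not a quotation from the talk):

```latex
% A type refinement system is a functor p : D -> T from a category D of
% typing derivations to a category T of terms.  An object S of D with
% p(S) = A is a refinement of the type A, and a typing judgement
\[
  S \Longrightarrow_{f} S'
\]
% is a morphism \alpha : S \to S' of D lying over f : A \to A' in T,
% i.e. with p(\alpha) = f.  A type having several refinements, and a
% judgement having several derivations, are then both just features
% of the fibres of p.
```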


24 January

Koko Muroya (University of Birmingham)
Dynamic Geometry of Interaction machine: call-by-need graph rewriter

Girard's Geometry of Interaction (GoI), a semantics of Linear Logic proofs, has been applied to program semantics in mainly two styles. One style yields graph rewriting systems for the lambda-calculus in which GoI gives an invariant of rewriting. The other style produces abstract machines that pass a token on a fixed graph along a path indicated by GoI. These styles of GoI in program semantics handle duplication of computation differently, with linear logic as a back end, and consequently can be efficient in different ways.
The graph-rewriting GoI achieves time efficiency by copying subgraphs, whereas the token-passing GoI is space efficient because it only repeats moves of a token in a fixed (sub)graph. Aiming to explore this spectrum of space and time efficiency, we introduce an abstract machine called the Dynamic GoI Machine (DGoIM). It combines graph rewriting with token passing by keeping a history of token passing.
We prove that the DGoIM can implement call-by-need evaluation by interleaving token passing with as much graph rewriting as possible. Finally, we explore the trade-offs of space and time cost in the DGoIM by comparing it with a variant of Danvy et al.'s call-by-need storeless abstract machine.
The quantitative analysis confirms that the two machines have the same space efficiency (up to constant factors) and that the DGoIM is more time efficient than the storeless abstract machine.

 

2016 


19 December

Thomas Sturm (CNRS, France and MPI for Informatics, Germany)
Beautiful Decision Methods and Adventurous Heuristics for Solving Problems over the Reals 

Effective quantifier elimination procedures for first-order theories provide a powerful tool for generically solving a wide range of problems based on logical specifications. In contrast to general first-order provers, quantifier elimination procedures are based on a fixed set of admissible logical symbols with an implicitly fixed semantics. This admits the use of subalgorithms from symbolic computation. We are going to start with traditional quantifier elimination applied to verification and simple problems from the life sciences. Beyond quantifier elimination, we are going to discuss recent results on an incomplete decision procedure for the existential fragment of the reals, which has been successfully applied to the analysis of reaction systems in chemistry and the life sciences, and which scales to models currently used in systems biology. We might mention our open-source computer logic software Redlog, where our methods are implemented (www.redlog.eu).

 

29 November

James Brotherston (University College London)
Biabduction (and Related Problems) in Array Separation Logic

I describe array separation logic (ASL), a variant of separation logic in which the data structures are either pointers or arrays. This logic can be used, e.g., to give memory safety proofs of imperative array programs. 

The key to automatically inferring specifications is the so-called "biabduction" problem: given formulas A and B, find formulas X and Y such that

A * X |= B * Y

(and such that A * X is also satisfiable), where * is the well-known "separating conjunction" of separation logic.  We give an NP decision procedure for this problem that produces solutions of reasonable quality, and we also show that the problem of finding a consistent solution is NP-hard. 
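As a small illustration of the shape of the problem (my example, not one from the paper), take A and B to describe two distinct cells:

```latex
% With A = x |-> a and B = y |-> b (x, y distinct), the pair
% X = y |-> b, Y = x |-> a solves the biabduction problem:
\[
  \underbrace{x \mapsto a \;*\; y \mapsto b}_{A \,*\, X}
  \;\models\;
  \underbrace{y \mapsto b \;*\; x \mapsto a}_{B \,*\, Y}
\]
% and A * X is satisfiable in any heap containing the two cells.
```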

Along the way, we study satisfiability and entailment in our logic, giving decision procedures and complexity bounds for both problems. 

This is joint work with Nikos Gorogiannis (Middlesex) and Max Kanovich (UCL).  A paper describing the work is available at https://arxiv.org/abs/1607.01993 .


22 November

Thomas Cottrell (University of Bath)
Operads, generalised operads, and weak n-categories

Operads are tools for defining algebraic structures in terms of the operations they have. In this talk, I will describe the classical case of operads, in which each operation has a natural number of inputs, called its arity. I will then explain how to generalise this definition to allow for operads with more complex shapes of inputs. Finally, I will show how these generalised operads can be used to define weak n-categories, a very general type of higher-dimensional category.
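For orientation, the classical case can be stated compactly (a standard formulation; the talk's conventions may differ):

```latex
% An operad O assigns to each arity n a set O(n) of n-ary operations,
% together with an identity in O(1) and composition maps
\[
  O(n) \times O(k_1) \times \cdots \times O(k_n)
  \;\longrightarrow\; O(k_1 + \cdots + k_n)
\]
% subject to associativity and unit axioms (and, for symmetric operads,
% compatible actions of the symmetric groups).  Generalised operads
% replace the arities n by more complex input shapes.
```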


15 November

Soichiro Fujii (The University of Tokyo)
Generalized Global States

From the outset, the global state has been among the leading examples in the algebraic approach to computational effects. Indeed, the approach itself started from the recognition that the global state admits a computationally natural presentation in terms of operations and equations.

In this talk, I attempt to shed new light on global state by introducing a new class of computational effects which I tentatively call `generalized global states'. First, I explain that the now standard presentation of the global state monad on Set (in terms of the update and lookup operations) is a particular instance of a much more general phenomenon, whose first appearance essentially dates back to Lawvere's thesis. Second, I present a unified operational semantics for generalized global states and state a relationship to Plotkin and Power's operational semantics based on effect values.
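As a concrete reference point, here is a small Haskell sketch of the standard presentation mentioned above (illustrative code of mine, not the speaker's): the global state monad together with lookup, an operation of arity S, and update, a unary operation for each new state.

```haskell
-- The global state monad T X = (S x X)^S on Set, as a Haskell type.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State m) = State $ \s -> let (a, s') = m s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State mf <*> State ma = State $ \s ->
    let (f, s')  = mf s
        (a, s'') = ma s'
    in  (f a, s'')

instance Monad (State s) where
  State m >>= k = State $ \s -> let (a, s') = m s in runState (k a) s'

-- lookup: an operation of arity S, branching on the current state.
lookupOp :: (s -> State s a) -> State s a
lookupOp k = State $ \s -> runState (k s) s

-- update_v: for each state v, a unary operation overwriting the state.
updateOp :: s -> State s a -> State s a
updateOp v (State m) = State $ \_ -> m v

-- One of the standard equations: an update followed by a lookup
-- sees the value just written, i.e.
--   updateOp v (lookupOp k)  ==  updateOp v (k v)
```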


1 November

Martín Escardó (University of Birmingham)
Continuity in type theory

The formulation in Martin-Löf type theory of a Brouwerian continuity principle, saying that all functions (N->N)->N are continuous, turns out to be inconsistent under the so-called Curry-Howard interpretation of logic. It becomes consistent under the univalent interpretation of logic, which is similar to that of topos logic. In particular, the notion of existence is interpreted as something strictly weaker than Martin-Löf's Sigma type, but stronger than its classical manifestation as the negation of a universal quantifier. In fact, the original paper by Howard already points out that there are two natural constructive notions of existence (a weak and a strong one).
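The principle in question can be stated as follows (my rendering of the standard formulation):

```latex
% Brouwerian continuity: every f : (N -> N) -> N depends only on a
% finite prefix of its argument.  Read with Sigma (the Curry-Howard
% reading of "exists"), the principle
\[
  \Pi(f).\,\Pi(\alpha).\,\Sigma(n).\,\Pi(\beta).\,
  (\alpha =_n \beta) \rightarrow f\,\alpha = f\,\beta
\]
% is inconsistent in Martin-Löf type theory, where \alpha =_n \beta
% means that \alpha and \beta agree on their first n arguments; with
% the truncated (univalent) "exists" in place of Sigma it is consistent.
```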


25 October

Pedro Henrique Carrasqueira (CLE, University of Campinas, Brazil)
Some introductory remarks to paraconsistency and logics of formal inconsistency 

A logic is paraconsistent if inconsistencies do not imply triviality in it. Paraconsistent logics are thus suited for reasoning in the presence of ineliminable inconsistencies, and they produce non-trivial but inconsistent theories. There are two main philosophical traditions in the study of inconsistency and paraconsistent logic. One of them advocates the position known as dialetheism: that some contradictions are true, or, equivalently (given classical negation), that some propositions are both true and false. There is, however, another, somewhat earlier tradition, which takes a very different approach to the matter of inconsistency. This tradition assumes, instead, paraconsistency to be a common trait of logics that are in some sense appropriate for situations of imperfect rationality or imperfect information. In particular, its research focuses on logics in which the presence of disagreement or misinformation can be expressed in the formal language itself. Collectively, the various logics developed by this tradition and having such a property are known as logics of formal inconsistency. In my talk I shall briefly discuss the main differences that keep these two traditions of paraconsistency apart, with special attention to the different theoretical and practical problems they aim to solve. Then I shall focus on logics of formal inconsistency, taking the logic called mbC as my main example of the kind of behaviour such logics exhibit. I shall end with some remarks on the ongoing research in this tradition of paraconsistency, as well as on its possible further developments.
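For a flavour of how mbC works, here is its usual presentation (following Carnielli et al.; the details may differ slightly from the talk):

```latex
% mbC extends positive classical logic (with modus ponens) by
% excluded middle and one axiom for the consistency operator \circ:
\[
  \alpha \lor \neg\alpha
  \qquad\qquad
  \circ\alpha \rightarrow (\alpha \rightarrow (\neg\alpha \rightarrow \beta))
\]
% so a contradiction entails everything only for formulas that are
% explicitly asserted to be consistent.
```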


4 October

David Sherratt (University of Bath) 
Towards an atomic abstract machine

The atomic lambda-calculus is known for its efficiency when evaluating terms; it is naturally fully lazy. Abstract machines are used to administer evaluation strategies. In this talk, I will discuss the idea of building an atomic abstract machine, implementing the evaluation strategy for the atomic lambda-calculus, with the intent of developing a fully lazy abstract machine and proving it to be effective.  

Alessio Santamaria (University of Bath)
Looking for a categorical notion of substitution of atomic flows

In this talk I present the work I have done in my first year: trying to give a categorical understanding of atomic flows in order to study their algebra - in the technical sense of the term, that is, the operations we can perform with them. Apart from plugging together and juxtaposing two flows, Guglielmi et al. proposed in an unpublished manuscript a notion of substitution of a flow inside a connected component of another, an operation which would generalise the usual notion of substitution of a formula inside the occurrences of an atom in another formula to the substitution of a derivation inside the occurrences of an atom in another derivation. Categorically speaking, it seems that their idea can be formalised in terms of horizontal composition of families of morphisms, with which we interpret atomic flows. I will show how we generalised the well-known definition of horizontal composition of natural transformations, in a meaningful way, to a larger class of families of morphisms, namely extranatural transformations; and I will describe what we have in mind for families of morphisms that are obtained by composing natural and extranatural transformations but are themselves neither.


12 May

Giulio Manzonetto (Université Paris XIII)
New Results on Morris's Observational Theory --- The Benefits of Separating the Inseparable

We study the theory of contextual equivalence in the untyped lambda-calculus, generated by taking the normal forms as observables. Introduced by Morris in 1968, this is the original extensional lambda theory H+ of observational equivalence. On the syntactic side, we show that this lambda-theory validates the omega-rule, thus settling a long-standing open problem. On the semantic side, we provide necessary and sufficient conditions for relational graph models to be fully abstract for H+. We show that a relational graph model captures Morris's observational pre-order exactly when it is extensional and lambda-König. Intuitively, a model is lambda-König when every lambda-definable tree has an infinite path which is witnessed by some element of the model.
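The omega-rule mentioned above is the following extensionality principle (standard formulation, stated here for reference):

```latex
% Two terms that agree on all closed arguments are equated:
\[
  \frac{M\,Z = N\,Z \quad \text{for every closed term } Z}{M = N}
\]
```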


27 April

Pino Rosolini (University of Genoa)
Frames from topology, algebraically

We describe a connection between frames and algebras for the double exponential monad on the Sierpinski space. Instrumental for the presentation is Dana Scott's category Equ of equilogical spaces. We present a subcategory of Equ, closed under the double exponential monad, on which the category of algebras is equivalent to that of frames (and frame homomorphisms). I hope to connect this with Taylor's work on Abstract Stone Duality.

This is joint work with Giulia Frosoni and Alessio Santamaria.


20 April

Jamie Vicary (University of Oxford)
Geometrical Proofs for Linear Logic

Linear logic is a fundamental way to reason about resources that cannot be duplicated or deleted. In this talk, I will present a new approach to the proof theory of linear logic, in which proofs are represented as surfaces embedded in 3-dimensional space. Proof equivalence then has a simple definition: two proofs are logically equivalent just when their surfaces are geometrically equivalent. The technical basis for the work comes from higher category theory, and I will give a simple and accessible introduction to this.


19 April

Paul Harrenstein (University of Oxford)
Expressiveness and Nash Equilibrium in Iterated Boolean Games

We introduce and investigate a novel notion of expressiveness for temporal logics that is based on game theoretic properties of multi-agent systems. We focus on iterated Boolean games, where each player has a goal, represented using (a fragment of) Linear Temporal Logic (LTL). This goal captures the player's preferences: the models of the goal represent system behaviours that would satisfy the player. Moreover each player is assumed to act strategically, taking into account the goals of the other players, in order to bring about computations satisfying their goal. In this setting, we apply the standard game-theoretic concept of Nash equilibria: the Nash equilibria of an iterated Boolean game can be understood as a (possibly empty) set of computations, each computation representing one way the system could evolve if players chose strategies in Nash equilibrium. Such an equilibrium set of computations can be understood as expressing a temporal property—which may or may not be expressible within a particular LTL fragment. The new notion of expressiveness that we study is then as follows: what LTL properties are characterised by the Nash equilibria of games in which agent goals are expressed in fragments of LTL? We formally define and investigate this notion of expressiveness and some related issues, for a range of LTL fragments.

 

12 April

Harry Gunn (University of Bath, Masters Student)
Nature-based Cryptography

This is a review of several papers by D. Grigoriev and V. Shpilrain on a novel approach to public key cryptography.  These authors write: 

"We use various laws of classical physics to offer several solutions of Yao’s millionaires’ problem without using any one-way functions. We also describe several informationally secure public key encryption protocols, i.e., protocols secure against passive computationally unbounded adversary. This introduces a new paradigm of decoy-based cryptography, as opposed to “traditional” complexity-based cryptography. In particular, our protocols do not employ any one-way functions.”


5 April

Sam Staton (University of Oxford)
Semantics for probabilistic programming

I'll talk about the semantics of probabilistic programming languages. This is an old subject, but recently probabilistic programming has attracted a lot of interest as a method of statistical modelling, through languages like Anglican and Church. These raise some new problems, such as how to combine continuous distributions with higher types. I'll describe our work on operational semantics and denotational semantics (based on sheaves and measurable spaces). 


15 March

John Power (University of Bath) 
Category theoretic semantics for theorem proving in logic programming: embracing the laxness
(joint with Ekaterina Komendantskaya)

A propositional logic program P may be identified with a PfPf-coalgebra on the set of atomic propositions in the program. The corresponding C(PfPf)-coalgebra, where C(PfPf) is the cofree comonad on PfPf, describes derivations by resolution. Using lax semantics, that correspondence may be extended to a class of first-order logic programs without existential variables. The resulting extension captures proofs by term-matching resolution in logic programming. Refining the lax approach, we further extend it to arbitrary logic programs.

 

8 March

John Gowers (University of Bath)
Games with ordinal sequences of moves

I shall present a modification of the Abramsky-Jagadeesan games model to allow sequences of moves indexed by transfinite ordinals.  The motivation for this construction is work by Laird and Churchill in [1,2] concerning the sequoid operator.  In [2], an exponential is constructed in the category of games that is a cofree commutative comonoid for the 'tensor on the left' functor and a final coalgebra for the 'sequoid on the left' functor.  In the category of finite games and strategies, this exponential can be constructed from the sequoid functor as the limit of a diagram indexed by the ordinal ω.  If we try to extend this result to the 'sequoidal categories' introduced in [1], then we find that this construction does not always produce a final coalgebra, but that for natural categories of games a similar construction using a higher ordinal will work.  If the lengths of plays in an ordinal game can be bounded by a limit ordinal κ, then we may construct the final coalgebra for the sequoid using a suitable diagram indexed by κ.  Conversely, if a game contains plays of length greater than an ordinal κ, then the limit of the natural diagram indexed by κ does not have the natural structure of a coalgebra. 

There is a sizeable body of research in the field of games with plays indexed by transfinite ordinals (sometimes called 'long games').  In [3], Itay Neeman presents results concerning whether or not such games are determined.  More recently, Laird has applied a similar model to the study of unbounded determinism in [4].  The construction given in this talk is a straightforward extension of the games model outlined in [1,5].  A nice feature of the construction is that it includes as special cases both the 'games with winning condition on infinite plays' given in [5], and the pure finite games introduced by Blass in [6]. 

After shifting focus from finite to infinite ordinals, it becomes convenient to treat plays (ordinal sequences of moves), rather than moves, as primitive, and one possible formulation is to define a game to be a sheaf of sets on some given ordinal κ, where the ordinal β < κ is sent to the set of legal plays of length β.  In contrast to the Abramsky-Jagadeesan model, in which moves are designated as Player-moves or Opponent-moves, we have a function that designates plays as Player-plays or Opponent-plays.  If a play is indexed by a limit ordinal, then it has no last move, so this distinction is important.  For example, in the case of games played over the ordinal ω + 1, we are free to specify whether a play of length ω should belong to Player or to Opponent, and this corresponds exactly to choosing a set of infinite plays that are Player-winning, as in [5]. 
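In symbols, this formulation reads roughly as follows (my paraphrase of the above, not the speaker's notation):

```latex
% A game over an ordinal \kappa assigns to each \beta < \kappa a set
% P(\beta) of legal plays of length \beta, with prefix restrictions
\[
  P(\beta) \longrightarrow P(\gamma) \qquad (\gamma \le \beta < \kappa),
\]
% the sheaf condition saying that a play of limit length is determined
% by its proper prefixes.  Polarity is then a function on plays,
% \lambda : \coprod_{\beta < \kappa} P(\beta) \to \{P, O\},
% rather than on moves.
```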

I shall outline the motivation and construction for games played over transfinite ordinals, and shall discuss briefly some tentative questions about links to ordinals occurring elsewhere in the theory - in particular, game-value ordinals for winning positions and consistency-strength ordinals in proof theory.

[1]: Martin Churchill, James Laird and Guy McCusker.  Imperative programs as proofs via game semantics.  LICS 2011: 65-74, June 2011.

[2]: James Laird.  A categorical semantics of higher order store.  Proceedings of CTCS '02, Elsevier, 2002.

[3]: Itay Neeman.  The Determinacy of Long Games.  De Gruyter Series in Logic and its Applications, 2004.

[4]: James Laird.  Sequential algorithms for unbounded nondeterminism.  MFPS XXXI: 271-287, 2015.

[5]: Samson Abramsky and Radha Jagadeesan.  Games and full completeness for multiplicative linear logic.  Journal of Symbolic Logic 59(2): 543-574, 1994.

[6]: Andreas Blass.  A game semantics for linear logic.  Annals of Pure and Applied Logic 56(1-3): 183-220, 1992.


1 March

Willem Heijltjes (University of Bath)
Proof Nets and Complexity

In this talk I will give an overview of some recent and some very recent developments in linear logic proof nets.


2015

8 December

Giuseppe Primiero (Middlesex University)
SecureND: Natural Deduction for Secure Trust

Applications in computational domains complement verified knowledge with information-sharing processes. From a logical viewpoint, formulating assertion operations in terms of a trust function is challenging, both conceptually and technically. In this talk we overview SecureND, a natural deduction calculus for knowledge derivation under trust. Its design is motivated by the problem of trust transitivity. We also present its implementation as the Coq protocol SecureNDC, used to deal with trusted sources in software management systems. We conclude with an overview of current and future extensions of our language.

 

1 December

Andrea Aler Tubella (University of Bath)
A generalised cut-elimination procedure through subatomic logic

Through subatomic logic we are able to present sufficient conditions for a proof system to enjoy cut-elimination. In this talk I will present subatomic logic, how it enables us to present proof systems that have a single (linear) rule scheme, and a recent result: we can generalise the splitting procedure for cut-elimination to any proof system whose rules and connectives have certain properties.

 

17 & 24 November

John Power (University of Bath)
Lawvere Theories

I plan to give two talks about Lawvere theories. These are not aimed at experts; rather, the point is to give the details. Lawvere introduced the notion in his PhD thesis in 1963, providing not only a category theoretic account of universal algebra but one that is also presentation-independent. Remarkably, his definition was embraced, albeit with a caveat and in different terms, by universal algebraists but not by category theorists. The latter, from 1966, generally preferred monads as a model of universal algebra, owing to a little more generality but at a very considerable cost. Computer scientists were then influenced to adopt monads, but much could be, and has been, gained by recasting some of the latter's concerns in terms of Lawvere theories. Ultimately, I think Lawvere theories are a superior approach, but they benefit very much from the relationship with monads, which I duly plan to explain.
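For concreteness, the definition at issue can be stated briefly (the standard formulation; the talks will of course spell out the details):

```latex
% A Lawvere theory is a small category L with finite products together
% with a strict finite-product-preserving identity-on-objects functor
\[
  I : \aleph_0^{\mathrm{op}} \longrightarrow L ,
\]
% where \aleph_0 is a skeleton of the category of finite sets.  A model
% of L in Set is a finite-product-preserving functor L -> Set; models
% and natural transformations between them form the category of
% algebras of L.
```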

 

27 October

Guillaume Munch-Maccagnoni (University of Cambridge)
Polarised realizability structures, models, and depolarisation

Polarisation describes the presence of an evaluation order, and is characterised denotationally by non-associativity of composition. We recently proposed a polarised, Curry-style approach to the λ-calculus with extensional sums, in correspondence with polarised intuitionistic logic. We suggested that associativity of composition in this context should not be seen as a syntactic axiom, but as an emergent property akin to termination. Traditionally, issues with sums in denotational semantics have been considered to concern extensionality rather than associativity. This will be explained in an introductory fashion in the first part.

In the second part, I will more formally relate termination in the λ-calculus with sums to depolarisation, i.e. associativity of composition, or more familiarly the fact that the order of evaluation does not matter. First, a general setting of polarised realizability structures for polarised calculi with or without control operators is developed. Then, a general technique to build observational models from these structures is explained. Finally, under broad conditions, the observational models that the non-associative syntactic structure gives rise to satisfy associativity of composition (and are therefore cartesian closed categories with binary coproducts). I will sketch an analogy between intuitionistic depolarisation and parametricity.

 

20 October

Matthijs Vákár (University of Oxford)
Game Semantics for Dependent Types

Game semantics can act as a unifying semantic framework, providing compelling models for a strikingly wide range of programming languages, type theories and logics. A notable exception has been dependent type theory, which had so far defied a game theoretic description. We present a proposal to fill this gap in the form of a new categorical model of dependent type theory, based on a category of games and history-free winning strategies. We model dependent type theory with 1-, Sigma-, Pi- and intensional Id-types as well as finite inductive type families (which act as ground types, like calendars). We discuss the place of the Id-types in the intensionality spectrum as well as the strong completeness properties the model satisfies.

Most of the talk should be understandable without prior knowledge of game semantics and dependent type theory.

 

15 October

Ugo Dal Lago (Università di Bologna)
Higher-Order Probabilistic Computation: Calculi, Observational Equivalence, and Implicit Complexity

Probabilistic models are more and more pervasive in computer science, and randomized algorithms are the ones offering the best performances in many domains. Higher-order probabilistic computation – in which a probabilistic function may be passed as a parameter and returned as a result – is on the other hand a relatively underdeveloped field, which is however receiving more and more attention. We give a survey of what is known about probabilistic lambda-calculi, later focusing on some of our recent results on implicit complexity and on inductive and coinductive techniques for program equivalence. Finally, we hint at how all this could be useful when structuring proofs of security for cryptographic primitives, but also when expressing probabilistic models in the context of machine learning.

 

10 June

Willem Heijltjes (University of Bath)
Complexity Bounds for Sum-Product Logic via Additive Proof Nets and Petri Nets

This is joint work with Dominic Hughes. We investigate efficient algorithms for the additive fragment of linear logic. This logic is an internal language for categories with finite sums and products, and describes concurrent two-player games of finite choice. In the context of session types, typing disciplines for communication along channels, the logic describes the communication of finite choice along a single channel.

We give a simple linear-time correctness criterion for unit-free propositional additive proof nets via a natural construction on Petri nets. This is an essential ingredient for the linear-time complexity of the combinatorial proofs for classical logic by Dominic Hughes.

For full propositional additive linear logic, including the units, we give a proof search algorithm that runs in time linear in the product of the sizes of the source and target formulas, and an algorithm for proof net correctness of the same time complexity. We prove that proof search in first-order additive linear logic is NP-complete.

 

2 June

Anupam Das (ENS Lyon)
A complete axiomatisation of MSOL on infinite trees.

We show that an adaptation of Peano's axioms for second-order arithmetic to the language of monadic second-order logic (MSOL) completely axiomatises the associated theory (SkS) over infinite trees. This continues a line of work begun by Büchi and Siefkes with axiomatisations of MSOL over various classes of linear orders. Our proof formalises, in the axiomatisation, a translation of MSO formulas to alternating parity tree automata. The main ingredient is the formalised proof of positional determinacy for the corresponding parity games which, as usual, allows us to complement automata and to deal with the negation of MSO formulas. The Comprehension Scheme of MSOL is used to obtain uniform winning strategies, whereas most usual proofs of positional determinacy rely on instances of the Axiom of Choice or transfinite induction. (Consequently we obtain an alternative decision procedure for MSOL over infinite trees, via proof search, that remains entirely internal to the language.)

This talk is based on joint work with Colin Riba that will be presented at LICS '15.

 

12 May

Georg Struth (University of Sheffield)
Completeness Theorems for Bi-Kleene Algebras and Series-Parallel Rational Pomset Languages

Pomsets form a standard model of true concurrency. In this lecture I present a completeness result for a class of pomset languages, which generalises the regular languages to the realm of concurrency. More precisely I show that the congruence on series-parallel rational pomset expressions generated by series-parallel rational pomset language identity is axiomatised by the axioms of Kleene algebra plus those of commutative Kleene algebra. A decision procedure is extracted from this proof. On the way to this result, series-parallel rational pomset languages are proved to be closed under the operations of co-Heyting algebras and homomorphisms. These results form a significant step towards a decision procedure for the equational theory of concurrent Kleene algebras, which have recently been proposed for concurrency verification (joint work with Michael Laurence).

 

5 May

James Brotherston (University College London)
Parametric completeness for separation theories (via hybrid logic).

In this talk, we consider the logical gap between the following two concepts:

(1) provability in a propositional axiomatisation of separation logic, which is usually given by the bunched logic BBI; and

(2) validity in an intended class of models of separation logic, as commonly considered in its program verification applications. Such intended classes are usually specified by a collection of algebraic axioms describing specific model properties, which we call a separation theory.

Here, we show first that several typical properties of separation theories are in fact not definable in BBI. Then, we show that these properties become definable in a natural hybrid extension of BBI, obtained by adding a theory of naming to BBI in the same way that hybrid logic extends normal modal logic. Finally, we show how to build an axiomatic proof system for our hybrid logic in such a way that adding any axiom of a certain form yields a sound and complete proof system with respect to the models satisfying those axioms. In particular, this yields sound and complete proof systems for any separation theory from our considered class (which, to the best of our knowledge, includes all those appearing in the literature).

This is joint work with Jules Villard, now at Facebook.

 

28 April

Guilhem Jaber (Queen Mary)
Reasoning on Equivalence of Stateful Programs with Operational Game Semantics

Contextual equivalence of programs written in a functional language with references (i.e. local mutable state) is a notoriously hard problem, especially with higher-order references (i.e. references which can store functions). In the last twenty years, different techniques have been introduced for that purpose: Kripke logical relations, bisimulations and algorithmic game semantics.

In this talk, we will see how to use operational game semantics, namely the trace semantics for a language with references introduced by Laird, to build a new technique, Kripke Open Bisimulations, for reasoning about equivalence of programs, taking the best of the previous methods. This technique is simple enough to be largely automated: it becomes possible to model-check equivalence of programs.

If time permits, we will see how to extend this technique to polymorphism.
