Institut Jean Nicod


New Ideas in Mathematical Philosophy

Paris-Stockholm Logic Meeting

2-3 November 2017

Organizers: Denis Bonnay (IHPST), Paul Egré (IJN)

This event is funded by the New Ideas in Mathematical Philosophy program.

THURSDAY November 2
Salle Langevin, ENS, 29 rue d’Ulm

Coffee and Welcome

Guillaume AUCHER (Rennes)
A new road towards universal logic?

A generic logic called ‘Gaggle logic’ is introduced. It is based on Gaggle theory and deals with connectives of arbitrary arity that are related to each other by abstract laws of residuation. We list the 96 binary connectives and the 16 unary connectives of Gaggle logic. We provide a sound and complete calculus for Gaggle logic which enjoys strong cut elimination and the display property. We show that Gaggle logic is decidable and satisfies the properties of conservativity and interpolation. We also introduce specific inference rules called ‘protoanalytic’ inference rules. These rules are such that, when added to the calculus of Gaggle logic, the resulting calculus still enjoys strong cut elimination and the display property. If the language considered contains conjunction and disjunction, then the interpolation theorem also transfers to these extensions of Gaggle logic. In the second part of the talk, we generalize Kracht’s correspondence results, established for basic tense logic, to Gaggle logic. We prove that a logic extending Gaggle logic is axiomatizable by means of so-called ‘protoanalytic’ inference rules if, and only if, the class of frames on which such a logic is based is definable by specific first-order frame conditions, also called ‘protoanalytic’. We provide algorithms that compute the corresponding protoanalytic inference rules from the protoanalytic first-order frame conditions, and vice versa. We illustrate these algorithms on well-known structural inference rules and show in particular how classical logic can be recovered from Gaggle logic by the addition of protoanalytic inference rules that refine the standard classical inference rules.
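The abstract laws of residuation behind Gaggle theory can be illustrated in a minimal Boolean setting (an illustrative sketch, not Aucher's framework; all names here are made up): taking fusion to be intersection over the powerset of a small universe, its right residual is material implication on sets, and the residuation law a∘b ≤ c iff b ≤ a→c can be checked exhaustively.

```python
from itertools import combinations

U = frozenset({0, 1, 2})          # a tiny universe

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def fuse(a, b):
    """Fusion: here, plain intersection."""
    return a & b

def residual(a, c):
    """Right residual of fusion: material implication on sets."""
    return (U - a) | c

# Residuation law:  a∘b ⊆ c  iff  b ⊆ a→c, checked over all triples of subsets.
ok = all((fuse(a, b) <= c) == (b <= residual(a, c))
         for a in subsets(U) for b in subsets(U) for c in subsets(U))
print(ok)  # True
```

In Gaggle logic the same pattern is imposed abstractly, for connectives of arbitrary arity, rather than read off from a Boolean algebra as here.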


Emmanuel CHEMLA (LSCP, Paris) and Paul ÉGRÉ (IJN/ENS, Paris)
Suszko’s Problem: mixed consequence and compositionality

Suszko's problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski's structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this paper we give a more systematic perspective on Suszko's problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance, and establishing that mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use these to derive maximum-rank results recently proved by French and Ripley, and in a different setting by Blasio, Wansing and Marcos, for logics with various structural properties (reflexivity, transitivity, neither, or both). We strengthen these results into exact rank results for non-permeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As acknowledged by Suszko, that reduction fails to preserve compositionality in general. We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we require the preservation of truth-compositionality and additional semantic properties.
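Mixed consequence can be made concrete with a small strong-Kleene sketch (an illustration of the Malinowski-style idea, not the paper's formal setting; identifiers are ours): premises and conclusions are checked against different sets of designated values.

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)      # strong Kleene truth values: false, half, true

def neg(v): return 1.0 - v
def disj(v, w): return max(v, w)

def mixed_valid(premises, conclusion, des_prem, des_concl, atoms):
    """Mixed consequence: whenever every premise takes a value designated
    for premises, the conclusion takes a value designated for conclusions."""
    for vals in product(VALS, repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(p(env) in des_prem for p in premises):
            if conclusion(env) not in des_concl:
                return False
    return True

STRICT, TOLERANT = {1.0}, {0.5, 1.0}
lem = lambda e: disj(e['p'], neg(e['p']))       # p ∨ ¬p

# 'st' consequence (strict premises, tolerant conclusions) validates excluded middle...
print(mixed_valid([], lem, STRICT, TOLERANT, ['p']))    # True
# ...while 'ts' consequence (tolerant premises, strict conclusions) does not.
print(mixed_valid([], lem, TOLERANT, STRICT, ['p']))    # False
```

The asymmetry between the two directions is exactly what "letting the notion of a designated value vary between premises and conclusions" buys.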

12:00-14:00: Lunch

John CANTWELL (KTH, Stockholm)
Making sense of (in)determinate truth: The semantics of free variables

It is argued that the truth value of a sentence containing free variables in a context of use (or the truth value of the proposition it expresses in a context of use), just like the reference of the free variables concerned, depends on the assumptions and posits given by the context. However, context may under-determine the reference of a free variable and the truth value of sentences in which it occurs. It is argued that in such cases a free variable has indeterminate reference and a sentence in which it occurs may have indeterminate truth value. On letting, say, $x$ be such that $x^2=4$, the sentence `Either $x=2$ or $x=-2$' is true, but the sentence `$x=2$' has an indeterminate truth value: it is determinate that the variable $x$ refers to either $2$ or $-2$, but it is indeterminate which of the two it refers to; as a result, `$x=2$' has a truth value, but that truth value is indeterminate. The semantic indeterminacy is analysed in a `radically' supervaluational or plurivaluational semantic framework. The analysis is contrasted with the epistemicist proposal of Breckenridge and Magidor (2012), which implies that (in the given context) `$x=2$' has a determinate but unknowable truth value.
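The plurivaluational picture can be sketched directly (a toy model, not Cantwell's formal semantics): quantify over all assignments to $x$ admitted by the posit $x^2=4$, and call a sentence determinate only when they all agree.

```python
# Admissible references for x, given the posit x**2 == 4 (searched over a
# small illustrative range of integers).
admissible = [a for a in range(-10, 11) if a * a == 4]    # [-2, 2]

def status(sentence):
    """Supervaluational status of a sentence over all admissible assignments."""
    vals = {sentence(a) for a in admissible}
    if vals == {True}:
        return "determinately true"
    if vals == {False}:
        return "determinately false"
    return "indeterminate"

print(status(lambda x: x == 2 or x == -2))   # determinately true
print(status(lambda x: x == 2))              # indeterminate
print(status(lambda x: x == 3))              # determinately false
```

The middle case is the one discussed in the abstract: `$x=2$' comes out neither determinately true nor determinately false.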


16:00-16:30: Coffee break

Peter PAGIN (Stockholm University)
Shifting Parameters and Propositions

As observed by David Kaplan and David Lewis, the presence of operators in a language that shift the point of evaluation creates problems for compositionality. Two sentences may have the same truth value with respect to a relevant point of evaluation, but different values at other points. Then, if we opt for the semantic value at a point as the relevant semantic value, compositionality is lost. Operators of this kind include modals (e.g. 'it is possible that'), temporal adverbs of quantification (e.g. 'sometimes, ') and locational adverbs of quantification (e.g. 'somewhere, '). One reaction is to change the semantic value, for instance to move from truth values at worlds to intensions. Kaplan extended this approach to temporal operators, opting for temporal propositions (functions from world-time pairs to extensions).

There are good reasons to preserve classical propositions as semantic values. This will involve giving up standard compositionality in favour of a more general version, and will require measures to handle the semantic contributions of indexicals.
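The compositionality problem can be reproduced in a two-world toy model (purely illustrative): two sentences agree in truth value at the point of evaluation, yet a shifty operator separates them, which is why truth values at a point cannot serve as compositional semantic values.

```python
WORLDS = ('w0', 'w1')

# Intensions (functions from worlds to truth values) for two sentences,
# given here as dictionaries.
A = {'w0': False, 'w1': False}
B = {'w0': False, 'w1': True}

def possibly(intension):
    """'It is possible that S': true at every world iff S holds at some world."""
    return {w: any(intension.values()) for w in WORLDS}

# Same extension at the point of evaluation w0 ...
print(A['w0'] == B['w0'])                       # True
# ... yet different values once embedded under the shifty operator:
print(possibly(A)['w0'], possibly(B)['w0'])     # False True
```

Moving from the value at w0 to the whole intension restores compositionality here; the talk's question is how to keep classical propositions instead.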

FRIDAY November 3
Salle des Actes, ENS, 45 rue d’Ulm

Eric JOHANNESSON (Stockholm University)
Simplicity, probability and Ockham's razor: an impossibility result

Ockham's razor is roughly the principle that, among two hypotheses that explain the evidence equally well, one should have a higher degree of belief in the simpler one. First we show that, on an understanding of simplicity in terms of Kolmogorov complexity (where, relative to some first-order language having at least one binary predicate, the Kolmogorov complexity of a hypothesis is the length of the shortest formulation of the hypothesis in that language), it's impossible for Ockham's razor to apply to all pairs of hypotheses. It's not even possible for it to apply to all pairs of mutually exclusive hypotheses. This is our first impossibility result. Secondly, it's natural to assume that Ockham's razor doesn't get to determine the relation between the probabilities of two hypotheses without taking the simplicity of those two hypotheses into account. For instance, if Ockham's razor (but not the laws of probability themselves) forces you to have a higher degree of belief in hypothesis A than in hypothesis B, then A should also be simpler than B. However, if we also assume that our language is first-order and contains arithmetic, and that Ockham's razor applies to at least two logically independent hypotheses of different complexity that are both consistent with but not entailed by some sufficiently rich theory of arithmetic (e.g. Peano arithmetic), then it can be shown that Ockham's razor is incompatible with the laws of probability. This is our main impossibility result. Hopefully, this goes some way towards explaining the difficulty of solving the problem of induction by invoking simplicity.
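Complexity as shortest-formulation length can be sketched in a toy propositional language over two atoms with ¬ and ∧ (a stand-in for the first-order setting of the abstract; the setup is ours, for illustration). Note that a hypothesis and its negation are mutually exclusive yet can differ in complexity, while their probabilities must sum to 1, which already hints at the tension between complexity orderings and the probability axioms.

```python
from itertools import product

ASSIGNS = list(product([0, 1], repeat=2))    # valuations of the atoms p, q

def min_lengths(max_size=9):
    """Map each truth table to the size (symbol count) of the shortest
    ¬/∧ formula over p, q expressing it, enumerated by increasing size."""
    p = tuple(a for a, b in ASSIGNS)
    q = tuple(b for a, b in ASSIGNS)
    by_size = {1: {p, q}}
    best = {p: 1, q: 1}
    for n in range(2, max_size + 1):
        new = set()
        for t in by_size[n - 1]:                     # negation adds one symbol
            new.add(tuple(1 - v for v in t))
        for i in range(1, n - 1):                    # conjunction: sizes i + j + 1
            j = n - 1 - i
            for t in by_size.get(i, ()):
                for u in by_size.get(j, ()):
                    new.add(tuple(v & w for v, w in zip(t, u)))
        by_size[n] = {t for t in new if t not in best}
        for t in by_size[n]:
            best[t] = n
    return best

best = min_lengths()
h = tuple(a & b for (a, b) in ASSIGNS)       # p ∧ q
nh = tuple(1 - v for v in h)                 # ¬(p ∧ q): mutually exclusive with h
print(best[h], best[nh])                     # 3 4
```

So even in this tiny language, mutually exclusive hypotheses come with different complexities, the kind of pair the razor is supposed to adjudicate.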

Francesca POGGIOLESI (CNRS, IHPST, Paris 1 Panthéon-Sorbonne)
Developing Bolzano’s Intuitions: An Alternative Approach to the Logic of Grounding

In this talk we will present a propositional logic for the notion of complete and immediate formal grounding, which is based on some intuitions of the great Bohemian thinker Bernard Bolzano. First, we will introduce the five main ideas behind our logic of grounding; secondly, since our logic takes the form of a calculus, we will present the rules that compose it; finally, we will briefly give some examples of derivations and illustrate the main results that can be obtained.

12:00-14:00: Lunch Break

Sebastian ENQVIST
Flat modal fixpoint logics with the converse modality

I present a generic completeness result for flat modal fixpoint logics extended with the converse modality, building on earlier work by Santocanale and Venema. Flat modal fixpoint logics are obtained by extending modal logic with connectives definable as least fixpoints of modal formulas, and correspond to fragments of the single-variable modal mu-calculus. I show that Santocanale and Venema's purely algebraic proof that least fixpoints in the free algebra of a flat fixpoint logic are constructive no longer works when the converse modality is added. Still, a completeness proof can be obtained in which a model for a consistent formula is constructed directly, using the induction rule in a way that is similar to the standard completeness proof for propositional dynamic logic. This approach is combined with the concept of a focus, which has previously been used in tableau-based reasoning for modal fixpoint logics.
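A flat fixpoint connective such as "eventually p", the least fixpoint of x ↦ p ∨ ◇x, can be computed on a finite Kripke frame by Knaster-Tarski iteration. A minimal sketch, with frame and valuation chosen purely for illustration:

```python
# A small Kripke frame: worlds and an accessibility relation.
WORLDS = {0, 1, 2, 3}
R = {0: {1}, 1: {2}, 2: set(), 3: {3}}
P = {2}                                    # worlds where the atom p holds

def diamond(X):
    """◇X: worlds with at least one successor in X."""
    return {w for w in WORLDS if R[w] & X}

def lfp(f):
    """Least fixpoint of a monotone set operator, iterated from ∅."""
    X = set()
    while True:
        Y = f(X)
        if Y == X:
            return X
        X = Y

# 'Eventually p' as the least fixpoint of x ↦ p ∨ ◇x.
print(sorted(lfp(lambda X: P | diamond(X))))   # [0, 1, 2]
```

World 3 only loops back to itself and never reaches p, so it falls outside the fixpoint; adding the converse modality would mean closing under predecessors as well, which is where the algebraic argument breaks down.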

Dag WESTERSTÅHL (Stockholm University)
A Carnapian approach to the meaning of logical constants: the case of modal logic

I will present ongoing work with Denis Bonnay, investigating (in the spirit of Carnap's 1943 book The Formalization of Logic) the extent to which logical consequence relations fix the meaning of the logical constants in the language, focusing on the language of basic propositional modal logic. We know from earlier work that the standard meaning of the usual connectives is fixed, also in a possible worlds setting, so the issue is the meaning of ☐. We also know exactly how classical first-order consequence constrains the meaning of ∀. But whereas there is a unique standard meaning of ∀, it is not immediately clear what the standard meaning of ☐ should be. And whereas there is essentially just one classical first-order logic, there are innumerable classical modal logics. The proper setting to discuss these issues is so-called neighborhood semantics. A (local) interpretation of ☐ is then a neighborhood frame (W; F), where F is any function from subsets of W to subsets of W. In the absence of a single obvious standard such function, one might call `standard' those interpretations that yield the usual truth clause for ☐ in relational Kripke semantics. Then one issue is to characterize the `Kripkean' neighborhood frames; I will present some attempts and some partial results on this question, and (if there is time) on
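On finite frames, whether a given neighborhood function F is `Kripkean' can at least be decided mechanically: F comes from a relation R via F(X) = {w : R[w] ⊆ X} just in case the canonical candidate R[w] = ∩{X : w ∈ F(X)} reproduces F. A small sketch (the examples and names are ours, for illustration):

```python
from itertools import combinations

W = (0, 1)

def subsets(ws):
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

def kripkean(F):
    """Is the neighborhood function F induced by some relation R,
    i.e. F(X) = {w : R[w] ⊆ X}?  Build the canonical candidate
    R[w] = ∩{X : w ∈ F(X)} and check that it reproduces F."""
    R = {}
    for w in W:
        Xs = [X for X in subsets(W) if w in F[X]]
        R[w] = frozenset(W) if not Xs else frozenset.intersection(*Xs)
    return all(F[X] == frozenset(w for w in W if R[w] <= X) for X in subsets(W))

full = frozenset(W)
# Box from the universal relation: F(X) = W if X = W, else ∅.
box = {X: (full if X == full else frozenset()) for X in subsets(W)}
# A non-normal 'box', true exactly on singletons: no relation yields this.
odd = {X: (full if len(X) == 1 else frozenset()) for X in subsets(W)}

print(kripkean(box), kripkean(odd))   # True False
```

The second frame fails because any relational ☐ is monotone and preserves W, which the singleton-detecting function violates.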