The dominant theme in all three seminars was the gap between syntax and semantics, conceived as a site for conceptual and mathematical invention rather than as a call to mimicry. This took us, in Seminar 1, from a study of Badiou's *Concept of Model* (read in the light of Althusser's critique of the "mirror myth of knowledge" in the introduction to *Reading Capital*, and as a more or less successful attempt to elaborate and refine this critique in order to intervene against the empiricist use of the logico-mathematical theory of models) and a case study of the Löwenheim-Skolem theorem (the first really significant theorem dealing with the concept of model, and also the one which opened the abyss between syntax and semantics, and produced the concept of "non-standard" models), to an examination, in Seminar 2, of Badiou's category of "forcing", focussing on how it draws on its two mathematical "conditions": Robinson's method for producing non-standard models for analysis (as analysed in Badiou's early text, "La Subversion infinitesimale"), and, of course, Paul Cohen's "forcing" technique in the controlled production of models for set theory, which was the key condition for the theory of truth and subjectivity in *Being and Event*.

Seminar 3 dealt with Girard's work, and its conceptual and historical context. We looked at his critique of Tarski-style semantics, and his more or less tacit philosophical concepts of the blind spot and of logic as essentially 'productive' -- themes which helped to link this material back to Althusser, etc. The shift from a set-theoretic paradigm to a procedural one -- the neglect of which would make Girard's project almost impossible to understand -- was examined in terms of the move from a set-theoretic conception of functions to a conception informed by the lambda calculus. A bit of time, not much, was spent on the parallel difference between Tarskian and denotational semantics (where what is modelled is the dynamics of proofs, not mere 'provability' -- the notion which semantic 'truth' roughly and imperfectly captures, in a garment which leaves nothing to the imagination and yet is far too bulky). We moved from there through the sequent calculi and the Curry-Howard isomorphism (the isomorphism between proofs, in the sequent calculus or natural deduction systems, and programmes, in the lambda calculus, the Turing machine formalism, or any actual computer), and so on, to *linear logic*, looking at the subtle tensions which emerge in our understanding of logic as we direct our attention to its hidden symmetries, its procedural aspects, etc. Another important conceptual distinction that we dealt with was that between "typed" and "untyped" systems -- focussing again on the lambda calculus, but with the intention of asking (in light of the Curry-Howard isomorphism) what an *untyped logic* might look like.

*Explanation, by way of example*: in the untyped lambda calculus, every lambda term -- every programme or function -- can interact with every other lambda term, even if the result is a non-terminating procedure (a 'crash') -- "plus 1", for example, can act not only on the numerals for which it was designed, but even on functions which have nothing to do with numbers. The result's not always pretty, but *something* always happens. The untyped lambda universe is a wild world, and this leads to some very strange facts -- such as every function possessing a fixed point (for all F there exists an X such that F(X) = X), even if this fixed point is monstrous. In the typed lambda calculus, by contrast, everything is domesticated: the functions are saddled with a "superegoic" apparatus of types (Girard's metaphor, I think, if not Joinet's) which limits interaction, and allows terms to act only upon terms of the appropriate "type". The upshot is that every function eventually "terminates" or reaches "normal form" -- nothing crashes -- in the typed calculus, but the control by which this peace is won seems a bit artificial, or at least superficial, and doesn't really seem to proceed from the deeper structure of the calculus.
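The fixed-point fact can be made concrete. Here's a minimal sketch of mine (not from the seminars) using Python, whose lambdas are likewise untyped: any term may be applied to any term, and Curry's fixed-point construction -- adapted to Python's strict evaluation as the so-called Z combinator -- conjures recursion out of nothing but self-application.

```python
# "plus 1" is designed for numbers...
plus1 = lambda n: n + 1

# ...but nothing stops us from handing it a function instead of a numeral.
# In pure untyped lambda calculus *something* always happens; in Python the
# analogue of a 'crash' is an exception:
#   plus1(plus1)   # TypeError

# The fixed-point theorem: for every F there exists an X with F(X) = X.
# Curry's construction, eta-expanded for strict evaluation (the Z combinator):
Z = lambda F: (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# Applied to a functional, it yields recursion without any named self-reference:
fact = Z(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120
```

Note that `Z` itself could never be assigned a simple type: it applies `x` to itself, which is exactly the kind of "wild" interaction the typed calculus forbids.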

[ADDENDUM: *What is a 'type' in logic, you ask? A type is the name of a proposition. "A & B", for example, is a proposition of type A&B. The Curry-Howard isomorphism maps proofs to programmes, and propositions to types of programmes. So the question, "What would an untyped logic look like?" becomes something like "Can we do logic without casting our propositions in types prior to the demonstrative work that explicates them and tests them for consequences?" Can we have a logic where we don't begin with a battery of atomic sentences and pre-fabricated connectives? That's the gist of it.*]
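To illustrate the propositions-as-types reading with a toy example of my own (the names here are illustrative, not anything from the seminars): a proof of "A & B implies A" corresponds to a programme that takes a pair and returns its first component, and a proof of "A implies (B implies A)" to the constant function.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Proof of A & B -> A (conjunction elimination) = first projection:
def fst(pair: Tuple[A, B]) -> A:
    return pair[0]

# Proof of A -> (B -> A) (weakening / the "K" combinator) = constant function:
def const(a: A) -> Callable[[B], A]:
    return lambda b: a

assert fst((3, "x")) == 3
assert const(3)("anything at all") == 3
```

The type annotations here play exactly the role of the propositions: the programme is the proof, and the typechecker is the proof-checker.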

Finally, we looked at ludics, which is just such a logic (an untyped logic, that is), and which in Girard's eyes succeeds in sublating the gap between syntax and semantics. This section was pretty much improvised. I'll try and write something more precise and thorough about it soon, and post it here. [ADDENDUM: *For now, I'll just say: cut-elimination, the algorithmic procedure by which appeals to lemmas are eliminated from a proof, rendering the proof wholly explicit, without 'subroutines', is the key. Cut-elimination is always possible for classical logic; it always yields a unique result for intuitionistic and linear logic; but only in ludics does the dynamic of cut-elimination find its full scope, becoming the real engine of the entire system. In 'pre-ludic' logics, many characteristically 'semantic' properties can be expressed in terms of syntactic properties of cut-free proofs. Ludic 'interaction' -- a generalized form of cut-elimination -- reaches into crannies that ordinary cut-elimination can't.*] In the meantime, curious readers can find some of my rough sketches of this subject matter (in English this time) here and here.
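For readers who want the computational gloss on cut-elimination: under Curry-Howard, a "cut" (an appeal to a lemma) corresponds to a function call, and eliminating the cut corresponds to inlining the function's body. A toy sketch of my own, not Girard's:

```python
# A "lemma", proved once and invoked by name:
double = lambda n: n + n

# A proof that appeals to the lemma twice -- it contains two 'cuts':
proof_with_cut = lambda n: double(double(n))

# The cut-free proof is wholly explicit: the lemma's content has been
# copied in place of each appeal, leaving no 'subroutines' behind:
proof_cut_free = lambda n: (n + n) + (n + n)

# Cut-elimination preserves what the proof establishes:
assert proof_with_cut(3) == proof_cut_free(3) == 12
```

The cut-free version is more explicit but also bigger -- and in general cut-elimination can blow a proof up enormously, which is one reason its dynamics are interesting in their own right.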

I'm happy to say that the seminars went extremely well, better than I could have hoped. I'm incredibly grateful for the boundless hospitality and generosity of Carlos Gomez, the Lacanian psychoanalyst who not only, through some incomprehensible faculty of persuasion, convinced the Department of Mathematics and Physics to invite me to come give the seminars, but also ensured that my wife and I received full royal treatment while in the city. (And what a city!)

The participants in the seminars were few, but brilliant, and I left with several loose threads which I hope to follow up soon in my research. Among the most interesting of these was the question of what sense should be read into Girard's project for a "transcendental syntax" -- of which *ludics* is just one adumbration -- with one participant, named Cristina, pointing out that this sounds more like Deleuze's conception of the transcendental than anything else (productive of what it conditions, untyped or 'wild', not already sorted into kinds, not resembling the conditioned -- unlike the Tarskian "meta"). This is something I'll have to look at more closely, so, readers, where should I start for a clear treatment of Deleuze's concept of the transcendental? Deleuze has always been someone I've liked quite a bit, but whom I've read more or less casually. I'm thinking that *Difference and Repetition* would be the key text on this topic, but I welcome other suggestions.

Luke, thanks a lot for the Seminars, I enjoyed them a lot and they opened up a lot of questions for me. Hope to see you sometime soon. Thanks again, Christina

I'm really happy to hear that, Christina. Thanks for your constant participation in all eight hours of them! I'm going to start giving more thought to Girard's concept of the transcendental and its relation to Deleuze's --- this is something we'll have to talk about again.

When are you headed to Cornell? Do you begin there this September? Say hi to the McNultys for me.

I would love to talk about that again, and with more grounds to say it. I will be heading to Cornell in August. I hope we will keep in touch.

Thanks again!