Game semantics of linear logic
Linear logic is a weird sort of logic. It’s most commonly explained by saying that it drops the “weakening” rule (from Γ ⊢ B conclude Γ, A ⊢ B) and the “contraction” rule (from Γ, A, A ⊢ B conclude Γ, A ⊢ B). In other words - you have to use all the assumptions, and you can’t use an assumption more than once. This is usually interpreted in terms of resources - just because I can make a B out of two As doesn’t mean I can do it with one A. And I can’t just throw away an A that I might not need - I need to find a process for getting rid of it.
However, there’s also another way of thinking about linear logic - in terms of games. The connectives of linear logic become ways of combining games into other games. The interpretation of the deduction rules is now that a proof of a sequent ⊢ A should provide a winning strategy for the game A.
Let us be a bit more (but not too) precise. There are two players. Following convention, we call them the player, P, and the opponent, O. It is understood that we are “on the side” of P, although we could equally well do it the other way - the logic is symmetric enough for that. A game has a starting player (in many treatments, it is assumed the opponent always goes first, but not here). Play is usually assumed to proceed back and forth, each player making one move at a time, but this is not important - the most important thing is that in each position, there is a well-defined next player. Each player has a number of possible moves, only some of which may be legal in a given position - by “position”, we simply mean the sequence of moves so far. We generally think of a player as losing when they have to move but have no legal moves.
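To make this setup concrete, here is a minimal sketch in Python (all names here are made up for illustration, and only finite games are covered): a game is a tree of positions, each labelled with whose turn it is, and a backward-induction check tells us who has a winning strategy under the convention that a player who must move but has no legal moves loses.

```python
from dataclasses import dataclass

P, O = "P", "O"

@dataclass(frozen=True)
class Game:
    turn: str          # who moves in this position: P or O
    moves: tuple = ()  # tuple of (label, Game) pairs: the legal moves

def winner(g: Game) -> str:
    """Backward induction: who has a winning strategy from this position?"""
    if not g.moves:
        # the player to move is stuck, so they lose
        return O if g.turn == P else P
    mover_wins = any(winner(sub) == g.turn for _, sub in g.moves)
    return g.turn if mover_wins else (O if g.turn == P else P)

# O must move but has nothing: P wins immediately.
dead_end_for_O = Game(turn=O)
# P to move, with one move leading to the position above: P wins by moving.
one_move = Game(turn=P, moves=(("go", dead_end_for_O),))

assert winner(dead_end_for_O) == P
assert winner(one_move) == P
assert winner(Game(turn=P)) == O   # P stuck on the first move loses
```

This is only the “well-defined next player, stuck means you lose” part of the story; the interesting structure comes from the operators below.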
The most basic operator is the “dual” or “negation” operator, A ↦ A⊥. This simply interchanges the positions of the players. The next operator is ⊗. The game A ⊗ B consists of playing A and B simultaneously - the player must move in whichever game the opponent just moved in, while the opponent can switch games at will.
Clearly, a winning strategy for A ⊗ B, for the player, entails a winning strategy for each component game. In this sense, ⊗ is like logical “and”.
But there’s also another operator like “and”, which is written &. The game A & B is played as follows: the opponent chooses one of A and B, then that game is played to completion, and its winner wins the whole game.
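The choice-style operators are easy to sketch in code. Below is a small, self-contained Python model (hypothetical names, finite games only): dual swaps the players at every position, with_ is the “opponent chooses a component” operator just described, and its dual oplus is the “player chooses” operator. The simultaneous-play operator ⊗ is deliberately left out - its positions are interleavings of positions of both component games, which takes more machinery than this sketch wants.

```python
from dataclasses import dataclass

P, O = "P", "O"

@dataclass(frozen=True)
class Game:
    turn: str          # who moves in this position
    moves: tuple = ()  # tuple of (label, Game) pairs

def other(side):
    return O if side == P else P

def winner(g):
    """Backward induction; the player stuck with no moves loses."""
    if not g.moves:
        return other(g.turn)
    return g.turn if any(winner(s) == g.turn for _, s in g.moves) else other(g.turn)

def dual(g):
    # A⊥: the same tree with the players swapped everywhere
    return Game(other(g.turn), tuple((l, dual(s)) for l, s in g.moves))

def with_(a, b):
    # A & B: the opponent picks a component, then that game is played out
    return Game(O, (("left", a), ("right", b)))

def oplus(a, b):
    # A ⊕ B = (A⊥ & B⊥)⊥: the player picks the component
    return Game(P, (("left", a), ("right", b)))

p_wins = Game(turn=O)   # O to move with no moves: P wins this game
o_wins = Game(turn=P)   # P to move with no moves: O wins this game

assert winner(with_(p_wins, p_wins)) == P   # P wins A & B iff P wins both...
assert winner(with_(p_wins, o_wins)) == O   # ...since O will pick the bad one
assert winner(oplus(p_wins, o_wins)) == P   # in ⊕, P picks the component they win
assert winner(dual(p_wins)) == O            # dualizing swaps the winner
```

The assertions match the “and”-like reading of &: the player only has a winning strategy when they can win whichever component the opponent picks.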
Here is a difference between these operators: there is a canonical strategy for (A ⊗ A⊥)⊥, but not for (A & A⊥)⊥. How does this work? To play (A ⊗ A⊥)⊥, we must play the opponent’s side in A ⊗ A⊥. Assume wlog that the other player goes first in A. We have them start playing in A, then copy this move into A⊥. Whatever they play in response in A⊥, we go to A and play that as our response to them, and so on. We’re guaranteed to win (exactly) one of the games, since the two games will be exactly mirrored. If this doesn’t make sense, imagine playing two chess games against another person - one as black, one as white. Starting with the game in which they’re white, they make a move. You copy this move on the other chessboard. They respond with black, which you also copy on the first board. When they mate you at some point on one board, you copy this immediately, winning the other game.
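The copycat strategy can be simulated directly. In the sketch below (hypothetical code, with Nim standing in for chess: players alternately remove objects from heaps, and whoever cannot move loses), we play two mirrored boards against an arbitrary adversary, echo each of their moves onto the other board, and check that the adversary ends up stuck on one of the two boards - so we win exactly one of them, without knowing anything about how to play Nim well.

```python
def nim_moves(heaps):
    # all legal moves: remove k objects from heap i
    return [(i, k) for i, h in enumerate(heaps) for k in range(1, h + 1)]

def apply_move(heaps, move):
    i, k = move
    hs = list(heaps)
    hs[i] -= k
    return tuple(hs)

def play_copycat(start, adversary):
    """Two mirrored boards: the adversary opens on board A, we open on board B
    by echoing. Returns (the board we win, the final positions)."""
    boards = {"A": start, "B": start}
    to_move = "A"                      # adversary opens on board A
    while True:
        pos = boards[to_move]
        if not nim_moves(pos):
            # the adversary is to move here and is stuck: we win this board
            return to_move, boards
        m = adversary(pos)             # the adversary moves...
        boards[to_move] = apply_move(pos, m)
        echo = "B" if to_move == "A" else "A"
        boards[echo] = apply_move(boards[echo], m)   # ...and we echo it
        to_move = echo                 # they must now answer our echo there

greedy = lambda heaps: max(nim_moves(heaps))   # an arbitrary adversary

won, final = play_copycat((3, 2), greedy)
assert won in ("A", "B")
assert final["A"] == final["B"]                # mirrored to the very end
assert nim_moves(final["A"]) == []
```

Note the invariant the code relies on: whenever the adversary is to move, the two boards are in identical positions, so the echoed move is always legal - exactly the “two chessboards” trick above.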
This strategy won’t work for (A & A⊥)⊥. Here we would have to decide at the beginning which of A and A⊥ we have a winning strategy in, then choose that one.
In logic, this means that the sequent ⊢ (A ⊗ A⊥)⊥ is provable, but ⊢ (A & A⊥)⊥ isn’t.
So far, we’ve only looked at the two forms of conjunction. There are also two forms of disjunction, given simply by A ⅋ B = (A⊥ ⊗ B⊥)⊥ and A ⊕ B = (A⊥ & B⊥)⊥. We could restate the above discussion as “A ⊕ A⊥ is not provable, but A ⅋ A⊥ is”. This means we have a form of excluded middle, but another form of excluded middle doesn’t hold.
Now, what is the meaning of a general sequent of the form Γ ⊢ Δ, where both Γ and Δ are multisets of formulas? We can interpret such a sequent as asserting the existence of a winning strategy for the game A₁⊥ ⅋ ⋯ ⅋ Aₘ⊥ ⅋ B₁ ⅋ ⋯ ⅋ Bₙ, where Γ = A₁, …, Aₘ and Δ = B₁, …, Bₙ. A proof is supposed to identify a specific strategy for this game. This also means that any sequent can be replaced by the one-sided sequent ⊢ Γ⊥, Δ, which suggests that linear logic can be expressed entirely in terms of one-sided sequents (which is indeed true).
Now we can think about the lack of contraction - we can examine simply the fact that A ⊢ A ⊗ A is not a provable sequent.
To prove this sequent, we would want a strategy for the game A⊥ ⅋ (A ⊗ A). We can think of this again in the case where A is chess. Imagine that instead of two games, there are three: two where you are black and one where you are white. You can only respond on whichever black board your opponent just moved on as white, or move as white on the board where you are white (forcing your opponent to respond there). And when you respond as black, your opponent can switch to the other black board. You can only mirror one of the black boards onto the game where you’re white, so the strategy-stealing trick no longer works.
On the other hand, weakening is valid in this semantics. We do have a strategy for the game A⊥ ⅋ A⊥ ⅋ A. Here you have two white boards and one black, and you can simply ignore the white board you don’t need, since you now have full control over which board you move on.
This is all written out in Andreas Blass, “A game semantics for linear logic” (Annals of Pure and Applied Logic, 1992).