§1 Introduction
Residuated structures play an important rôle in abstract algebra and substructural logic (Krull, 1924; Ward & Dilworth, 1939; Ono, 1993; Jipsen & Tsinakis, 2002; Galatos et al., 2007; Abramsky & Tzevelekos, 2010). We introduce the Lambek calculus with the unit (Lambek, 1969) as the algebraic logic (inequational theory) of residuated monoids. A residuated monoid is a partially ordered algebraic structure $\langle M; \preceq, \cdot, \mathbf{1}, \backslash, / \rangle$, where:

$\langle M; \cdot, \mathbf{1} \rangle$ is a monoid;

$\preceq$ is a preorder;

$\backslash$ and $/$ are residuals of $\cdot$ w.r.t. $\preceq$, i.e. $b \preceq a \backslash c \iff a \cdot b \preceq c \iff a \preceq c \mathbin{/} b$.
Notice that in the presence of residuals we do not need to postulate monotonicity of $\cdot$ w.r.t. $\preceq$ explicitly: it follows from the rules for $\backslash$ and $/$ (Lambek, 1958).
The Lambek calculus with the unit axiomatises the set of atomic sentences of the form $A \preceq B$ (where $A$ and $B$ are formulae constructed from variables and the constant $\mathbf{1}$ using three binary operations: $\cdot$, $\backslash$, $/$) which are generally true in residuated monoids.
We formulate it in the form of a Gentzen-style sequent calculus. Formulae are built from a countable set of variables and the constant $\mathbf{1}$ using three binary connectives: $\cdot$, $\backslash$, $/$. Sequents are expressions of the form $\Pi \vdash B$, where $B$ (the succedent) is a formula and $\Pi$ (the antecedent) is a finite sequence of formulae. The antecedent is allowed to be empty; the empty sequence of formulae is denoted by $\Lambda$.
A sequent $A_1, \ldots, A_n \vdash B$ is interpreted as $A_1 \cdot \ldots \cdot A_n \preceq B$; a sequent $\Lambda \vdash B$ means $\mathbf{1} \preceq B$.
Axioms are sequents of the form $A \vdash A$ and $\Lambda \vdash \mathbf{1}$. Rules of inference are as follows:
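For reference, the standard rules (a sketch in our own notation; rule names and layout may differ cosmetically from the original presentation) read:

```latex
\[
\frac{A, \Pi \vdash B}{\Pi \vdash A \backslash B}\;({\to}\backslash)
\qquad
\frac{\Pi \vdash A \quad \Gamma, B, \Delta \vdash C}{\Gamma, \Pi, A \backslash B, \Delta \vdash C}\;(\backslash{\to})
\qquad
\frac{\Pi, A \vdash B}{\Pi \vdash B \,/\, A}\;({\to}/)
\qquad
\frac{\Pi \vdash A \quad \Gamma, B, \Delta \vdash C}{\Gamma, B \,/\, A, \Pi, \Delta \vdash C}\;(/{\to})
\]
\[
\frac{\Gamma \vdash A \quad \Delta \vdash B}{\Gamma, \Delta \vdash A \cdot B}\;({\to}\cdot)
\qquad
\frac{\Gamma, A, B, \Delta \vdash C}{\Gamma, A \cdot B, \Delta \vdash C}\;(\cdot{\to})
\qquad
\frac{\Gamma, \Delta \vdash C}{\Gamma, \mathbf{1}, \Delta \vdash C}\;(\mathbf{1}{\to})
\qquad
\frac{\Pi \vdash A \quad \Gamma, A, \Delta \vdash C}{\Gamma, \Pi, \Delta \vdash C}\;(\mathrm{cut})
\]
```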
Completeness is proved by the standard Lindenbaum–Tarski construction.
One of the natural examples of residuated monoids is the algebra of formal languages over an alphabet $\Sigma$. The preorder is the subset relation; multiplication is pairwise concatenation: $A \cdot B = \{\, uv \mid u \in A,\ v \in B \,\}$,
and divisions are defined as follows: $A \backslash C = \{\, u \in \Sigma^{*} \mid vu \in C \text{ for all } v \in A \,\}$ and $C \mathbin{/} B = \{\, u \in \Sigma^{*} \mid uv \in C \text{ for all } v \in B \,\}$.
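As a concrete illustration (a sketch with names of our own; divisions over the infinite set $\Sigma^{*}$ are restricted here to a finite universe of short words), the language operations can be coded as follows:

```python
from itertools import product

def words_upto(sigma, n):
    """All words over the alphabet sigma of length at most n."""
    return {"".join(w) for k in range(n + 1) for w in product(sigma, repeat=k)}

def concat(a, b):
    """Pairwise concatenation: A . B = {uv : u in A, v in B}."""
    return {u + v for u in a for v in b}

def left_div(a, c, universe):
    """A \\ C, restricted to a finite universe: {u : vu in C for all v in A}."""
    return {u for u in universe if all(v + u in c for v in a)}

def right_div(c, b, universe):
    """C / B, restricted to a finite universe: {u : uv in C for all v in B}."""
    return {u for u in universe if all(u + v in c for v in b)}
```

On finite samples one can then observe the residuation law: $A \cdot B \subseteq C$ holds exactly when $B \subseteq A \backslash C$ (within the chosen universe).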
This interpretation of the Lambek calculus on formal languages corresponds to the original idea of Lambek (1958) to use the Lambek calculus as a basis for categorial grammars. The concept of categorial grammars goes back to Ajdukiewicz (1935) and Bar-Hillel (1953). Nowadays categorial grammars based on the Lambek calculus and its extensions are used to describe fragments of natural language in the type-logical linguistic paradigm. In this article we need Lambek categorial grammars only as a technical gadget for our complexity proofs. Thus, in §3 we give only formal definitions and formulate the results we need; for an in-depth discussion of linguistic applications we refer the reader to Carpenter (1998), Morrill (2011), and Moot & Retoré (2012).
The notion of residuated monoid can be extended by additional algebraic operations. These include meet ($\wedge$) and join ($\vee$), which impose a lattice structure on the given preorder: $a \wedge b = \inf\{a, b\}$, $a \vee b = \sup\{a, b\}$, and we postulate that these suprema and infima exist for any $a$ and $b$. A residuated monoid with meets and joins is a residuated lattice. On the algebra of formal languages, meet and join correspond to set-theoretic intersection and union respectively.
On the logical side, meet and join are called, respectively, additive conjunction and disjunction. This terminology follows Girard’s linear logic (Girard, 1987). The monoidal product, $\cdot$, is multiplicative conjunction, and the two divisions, $\backslash$ and $/$, are left and right linear implications. The rules for $\wedge$ and $\vee$ are as follows:
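The additive rules are the standard ones (our notation; here $i \in \{1, 2\}$):

```latex
\[
\frac{\Gamma, A_i, \Delta \vdash C}{\Gamma, A_1 \wedge A_2, \Delta \vdash C}\;(\wedge{\to})
\qquad
\frac{\Pi \vdash A \quad \Pi \vdash B}{\Pi \vdash A \wedge B}\;({\to}\wedge)
\qquad
\frac{\Gamma, A, \Delta \vdash C \quad \Gamma, B, \Delta \vdash C}{\Gamma, A \vee B, \Delta \vdash C}\;(\vee{\to})
\qquad
\frac{\Pi \vdash A_i}{\Pi \vdash A_1 \vee A_2}\;({\to}\vee)
\]
```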
The system presented above is substructural, that is, it lacks the usual logical principles of weakening, contraction, and permutation. For this reason, logical connectives split, and we have to consider two implications (left and right) and two versions of conjunction (multiplicative and additive). For more discussion of substructurality we refer to Restall (2000).
Another, more sophisticated operation to be added to residuated monoids is iteration, or Kleene star, first introduced by Kleene (1956). Residuated lattices extended with Kleene star are called residuated Kleene lattices (RKLs), or action lattices (Pratt, 1991; Kozen, 1994b). Throughout this article, we consider only *-continuous RKLs, in which Kleene star is defined as follows:
$a^{*} = \sup\{\, a^{n} \mid n \geq 0 \,\}$ (where $a^{n} = a \cdot \ldots \cdot a$, $n$ times, and $a^{0} = \mathbf{1}$).^1 ^1In the presence of divisions, the usual definition of *-continuity, $b \cdot a^{*} \cdot c = \sup\{\, b \cdot a^{n} \cdot c \mid n \geq 0 \,\}$, can be simplified by removing the context $b$, $c$. In particular, the definition of a *-continuous RKL postulates the existence of all these suprema.
Axioms and rules for Kleene star naturally come from its definition:
Here $A^{n}$ means $A \cdot \ldots \cdot A$, $n$ times; $A^{0} = \mathbf{1}$.
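In our notation, these rules can be sketched as follows: the right rule is a family of finitary rules, one for each $n \geq 0$ (the case $n = 0$ yields $\Lambda \vdash A^{*}$), while the left rule is infinitary:

```latex
\[
\frac{\Pi_1 \vdash A \quad \ldots \quad \Pi_n \vdash A}{\Pi_1, \ldots, \Pi_n \vdash A^{*}}\;({\to}{*})_n
\qquad
\frac{\bigl(\Gamma, A^{n}, \Delta \vdash B\bigr)_{n \geq 0}}{\Gamma, A^{*}, \Delta \vdash B}\;({*}{\to})_\omega
\]
```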
The left rule for Kleene star is an $\omega$-rule, i.e., it has countably many premises. In the presence of an $\omega$-rule, the notion of derivation should be formulated more accurately. A valid derivation is allowed to be infinite, but should still be well-founded, that is, it should not include infinite paths. Thus, the set of theorems is the smallest set of sequents which includes all axioms and is closed under application of rules.
Notice that meets and joins are not necessary for defining the Kleene star; thus we can consider residuated monoids with iteration.
The logic presented above axiomatises the inequational theory of *-continuous RKLs. It is called infinitary action logic (Buszkowski & Palka, 2008) and denoted by $\mathbf{ACT}_\omega$. Syntactically, the set of theorems of $\mathbf{ACT}_\omega$ is the smallest set that includes all axioms and is closed under the inference rules presented above; completeness is again by the Lindenbaum–Tarski construction.
Palka (2007) proved cut elimination for $\mathbf{ACT}_\omega$. As usual, cut elimination yields the subformula property (if a formula appears somewhere in a cut-free derivation, then it is a subformula of the goal sequent). Therefore, elementary fragments of $\mathbf{ACT}_\omega$ with restricted sets of connectives are obtained by simply taking the corresponding subsets of inference rules. These fragments are denoted by listing the connectives in parentheses after the name of the calculus, like $\mathbf{ACT}_\omega(\backslash, \cdot, /, {*})$, for example.
Some of these fragments also have their specific names: is the Lambek calculus with the unit and iteration and is denoted by ; its fragment without the unit, , is denoted by . The multiplicative-additive Lambek calculus, , is . The Lambek calculus with the unit, , is .
Finally, consider the Lambek calculus allowing empty antecedents, but without the unit constant. This calculus is usually denoted by $\mathrm{L}^{*}$ (Lambek, 1961). Unfortunately, this yields a notation clash with the Kleene star, which is also denoted by a superscript ${*}$, and plain “$\mathrm{L}$” is reserved for the system with Lambek’s restriction (see the next section). Therefore, we introduce a new name for this calculus, indicating that empty antecedents are allowed in it, and we denote its product-free fragment accordingly.
Buszkowski (2007) and Palka (2007) show that the derivability problem in $\mathbf{ACT}_\omega$ is $\Pi_1^0$-complete; in particular, the set of all theorems of $\mathbf{ACT}_\omega$ is not recursively enumerable. Thus, the usage of an infinitary proof system becomes inevitable. Buszkowski (2007) also shows $\Pi_1^0$-hardness for fragments of $\mathbf{ACT}_\omega$ where one of the additive connectives ($\wedge$ or $\vee$, but not both) is removed. The complexity of the Lambek calculus with Kleene star but without both additives, i.e., the logic of residuated monoids with iteration, was, however, left by Buszkowski as an open problem. In this article we prove $\Pi_1^0$-hardness of this problem (the upper bound is inherited by conservativity).
The rest of this article is organised as follows. In §2 we discuss Lambek’s restriction and how to prove the desired complexity result without this restriction imposed. In §3 we survey results on connections between Lambek grammars and context-free ones. The main part of the article is contained in §4 and §5. In §4, we present the complexity results by Buszkowski and Palka for $\mathbf{ACT}_\omega$ and show how to strengthen the lower bound and prove $\Pi_1^0$-completeness for the system without $\wedge$ and $\vee$. This complexity proof uses a version of Safiullin’s theorem for the Lambek calculus without Lambek’s restriction. We prove it in §5, which is the most technically involved section. Finally, §6 and §7 contain some final remarks and discussions.
§2 Lambek’s Restriction
A preliminary version of this article was presented at WoLLIC 2017 and published in its lecture notes (Kuznetsov, 2017). In the WoLLIC paper, we show $\Pi_1^0$-completeness of a closely related system, namely, the logic of residuated semigroups with positive iteration. From the logical point of view, in this system there is no unit constant, and there should always be something on the left-hand side of a sequent.
This constraint is called Lambek’s restriction. In the presence of Lambek’s restriction one cannot add Kleene star, and it gets replaced by positive iteration, or “Kleene plus,” with the following rules:
In the setting without Lambek’s restriction, Kleene plus is also available, being expressible in terms of the Kleene star: $A^{+} = A \cdot A^{*}$.
Lambek’s restriction was imposed on the calculus in Lambek’s original paper (Lambek, 1958) and is motivated linguistically (Moot & Retoré, 2012, Sect. 2.5). Unfortunately, there are no conservativity relations between and . For example, is derivable in (thus in ), but not in (thus not in ), though the antecedent here is not empty. Hence, hardness of the latter is not obtained automatically as a corollary.
Moreover, the proof of $\Pi_1^0$-hardness crucially depends on the following result by Safiullin (2007): any context-free language without the empty word can be generated by a Lambek grammar with unique type assignment; and Safiullin’s proof essentially uses Lambek’s restriction. In this article, we feature a new result, namely, $\Pi_1^0$-completeness of the calculus without Lambek’s restriction itself. We modify Safiullin’s construction and extend his result to the setting without Lambek’s restriction (which is already interesting on its own). Next, we use this new result in order to prove the desired $\Pi_1^0$-hardness.
§3 Lambek Grammars and Context-Free Grammars
In this section we introduce Lambek categorial grammars and formulate equivalence results connecting them with a more widely known formalism, context-free grammars. The notion of Lambek grammar is defined as follows: a Lambek grammar over an alphabet is a triple, consisting of the alphabet itself, a designated Lambek formula called the goal type, and a finite binary correspondence between letters of the alphabet and Lambek formulae.
A word over the alphabet is accepted by a grammar if there exist formulae assigned by the grammar to its letters (one for each letter occurrence) such that the resulting sequent, with the goal type as succedent, is derivable in the Lambek calculus.
The language generated by a grammar is the set of all words accepted by this grammar.
In view of the previous section, one should distinguish grammars with and without Lambek’s restriction: the same grammar could generate different languages, depending on whether Lambek’s restriction is imposed or not.
We also recall the more widely known notion of context-free grammars.
A context-free grammar over an alphabet is a quadruple consisting of the alphabet itself, an auxiliary alphabet of nonterminal symbols disjoint from it, a designated nonterminal symbol called the starting symbol, and a finite set of production rules. Production rules are written in the form $A \Rightarrow \alpha$, where $A$ is a nonterminal symbol and $\alpha$ is a word (possibly empty) over the joint alphabet. Production rules with an empty $\alpha$ (of the form $A \Rightarrow \varepsilon$)^2 are called $\varepsilon$-rules. ^2We use $\Lambda$ for the empty sequence of formulae and $\varepsilon$ for the empty word over an alphabet.
A word over the joint alphabet is immediately derivable from another one in a context-free grammar if it is obtained by replacing an occurrence of a nonterminal by the right-hand side of a production rule for this nonterminal. The derivability relation is the reflexive-transitive closure of immediate derivability.
The language generated by a context-free grammar is the set of all words over the terminal alphabet (i.e., without nonterminals) which are derivable from the starting symbol. Such languages are called context-free.
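For illustration, here is a small brute-force enumerator for the generated language (a sketch with names of our own; it assumes an $\varepsilon$-free grammar, so every sentential form longer than the length bound can be pruned, since lengths never shrink):

```python
from collections import deque

def generated_words(rules, start, max_len):
    """Collect all terminal words of length <= max_len derivable from `start`.
    rules: dict mapping a nonterminal (an upper-case letter) to a list of
    right-hand sides; the grammar must be epsilon-free."""
    seen, out = {start}, set()
    queue = deque([start])
    while queue:
        s = queue.popleft()
        nts = [i for i, ch in enumerate(s) if ch in rules]
        if not nts:
            out.add(s)          # no nonterminals left: a terminal word
            continue
        for i in nts:           # one step of the immediate-derivability relation
            for rhs in rules[s[i]]:
                t = s[:i] + rhs + s[i + 1:]
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    queue.append(t)
    return out
```

For instance, the grammar with rules $S \Rightarrow ab$ and $S \Rightarrow aSb$ generates exactly the words $a^{n} b^{n}$, $n \geq 1$.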
Two grammars (for example, a Lambek grammar and a context-free one) are called equivalent if they generate the same language.
Buszkowski’s proof of $\Pi_1^0$-hardness uses the following translation of context-free grammars into Lambek grammars: [C. Gaifman, W. Buszkowski] Any context-free grammar without $\varepsilon$-rules can be algorithmically transformed into an equivalent Lambek grammar, no matter with or without Lambek’s restriction.
This theorem was proved by Gaifman (Bar-Hillel et al., 1960), but with basic categorial grammars instead of Lambek grammars. Buszkowski (1985) noticed that Gaifman’s construction also works for Lambek grammars.
The reverse translation is also available (in this article we do not need it): [M. Pentus] Any Lambek grammar, no matter with or without Lambek’s restriction, can be algorithmically transformed into an equivalent contextfree grammar. (Pentus, 1993)
For our purposes we shall need a refined version of Theorem §3. A Lambek grammar is a grammar with unique type assignment if for any letter there exists exactly one formula corresponding to it.
[A. Safiullin] Any context-free grammar without $\varepsilon$-rules can be algorithmically transformed into an equivalent Lambek grammar with unique type assignment, with Lambek’s restriction. (Safiullin, 2007)
Notice that Theorem §3, as formulated and proved by Safiullin, needs Lambek’s restriction. If one applied Safiullin’s transformation and then abolished Lambek’s restriction, the resulting grammar could generate a different language, and the new grammar would not be equivalent to the original context-free one.
§4 Complexity of the Infinitary Calculi with and without Additives
We start with the known results on the algorithmic complexity of $\mathbf{ACT}_\omega$, infinitary action logic with additive connectives.
[W. Buszkowski and E. Palka] The derivability problem in $\mathbf{ACT}_\omega$ is $\Pi_1^0$-complete. (Buszkowski, 2007; Palka, 2007)
An algorithmic decision problem, presented as a set $P$, belongs to $\Pi_1^0$ if there exists a decidable set $Q$ of pairs $\langle x, y \rangle$ such that $x \in P$ if and only if $\langle x, y \rangle \in Q$ for all $y$. The complexity class $\Pi_1^0$ is dual to $\Sigma_1^0$, the class of recursively enumerable sets^3: a set is $\Pi_1^0$ if and only if its complement is recursively enumerable. ^3We suppose that sequents of $\mathbf{ACT}_\omega$, as well as context-free grammars, are encoded as words over a fixed finite alphabet, or as natural numbers. The “most complex” sets in $\Pi_1^0$ are called $\Pi_1^0$-complete sets: a set $P$ is $\Pi_1^0$-complete if (1) it belongs to $\Pi_1^0$ (upper bound); (2) it is $\Pi_1^0$-hard, that is, any other $\Pi_1^0$ set $P'$ is m-reducible to $P$ (lower bound). The latter means that there exists a computable function $f$ such that $x \in P'$ iff $f(x) \in P$. By duality, a set is $\Pi_1^0$-complete iff its complement is $\Sigma_1^0$-complete. For example, since the halting problem for Turing machines is $\Sigma_1^0$-complete, the non-halting problem is $\Pi_1^0$-complete. Notice that a $\Pi_1^0$-complete set cannot belong to $\Sigma_1^0$: otherwise it would be decidable by Post’s theorem, and this would lead to decidability of all sets in $\Pi_1^0$, which is not the case. Thus, Theorem §4 implies the fact that the set of sequents provable in $\mathbf{ACT}_\omega$ is not recursively enumerable, and $\mathbf{ACT}_\omega$ itself cannot be reformulated as a system with finite derivations.
Division operations ($\backslash$ and $/$) are essential for the lower complexity bound. For the division-free fragment, a famous result by Kozen (1994a) provides completeness of an inductive axiomatization for the Kleene star and establishes PSPACE complexity. As we prove in this article, however, the fragment with divisions and without additives is still $\Pi_1^0$-complete.
The upper bound in Theorem §4 was proved by Palka (2007) using the following *-elimination technique. For each sequent define its $n$th approximation by replacing all negative occurrences of $B^{*}$ by $\mathbf{1} \vee B \vee \ldots \vee B^{n}$. Formally this is done by the following mutually recursive definitions:
Now the $n$th approximation of a sequent is obtained by applying this replacement to all its formulae. Palka (2007) proves that a sequent is derivable in $\mathbf{ACT}_\omega$ if and only if all its approximations are derivable. In a cut-free derivation of an approximation, however, the left rule for ${*}$ can never be applied, since there are no negative occurrences of ${*}$-formulae anymore. Proof search without the $\omega$-rule is decidable. Thus, we get the $\Pi_1^0$ upper bound for $\mathbf{ACT}_\omega$. By conservativity, this upper bound is valid for all elementary fragments of $\mathbf{ACT}_\omega$, in particular, for the fragment without additives.
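Palka’s replacement of negative star occurrences can be sketched as follows (our reconstruction, not the original notation: the superscripts $-$ and $+$ mark the polarity of the occurrence, and polarity flips in the denominator positions of divisions):

```latex
\[
(A^{*})^{-}_{n} = \bigvee_{k=0}^{n} \bigl(A^{-}_{n}\bigr)^{k},
\qquad
(A^{*})^{+}_{n} = \bigl(A^{+}_{n}\bigr)^{*},
\qquad
(A \backslash B)^{\pm}_{n} = A^{\mp}_{n} \backslash B^{\pm}_{n},
\]
```

and similarly for $/$; the remaining connectives are translated componentwise without changing polarity.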
For the lower bound (hardness), Buszkowski (2007) presents a reduction of a well-known $\Pi_1^0$-complete problem, the totality problem for context-free grammars, to derivability in $\mathbf{ACT}_\omega$. Buszkowski’s reduction is as follows. Let total denote the following algorithmic problem: given a context-free grammar without $\varepsilon$-rules, determine whether it generates all nonempty words over its alphabet. It is widely known that total is $\Pi_1^0$-complete (Sipser, 2012; Du & Ko, 2001). The reduction of this problem to derivability in $\mathbf{ACT}_\omega$ is performed as follows: given a context-free grammar, transform it (by Theorem §3) into an equivalent Lambek grammar. For each letter, combine the formulae that are in correspondence with it by additive conjunction, and combine the results over the whole alphabet by additive disjunction. Then the grammar generates all nonempty words if and only if the corresponding sequent is derivable in $\mathbf{ACT}_\omega$ (Buszkowski, 2007, Lm. 5). Thus, we have a reduction of total to the derivability problem in $\mathbf{ACT}_\omega$, and therefore the latter is $\Pi_1^0$-hard.
This construction essentially uses both additive connectives, $\wedge$ and $\vee$. As shown by Buszkowski (2007), one can easily get rid of $\vee$ by the following trick. First, notice that total is already $\Pi_1^0$-hard if we consider only languages over a two-letter alphabet (Du & Ko, 2001). Second, $(p \vee q)^{*}$ can be equivalently replaced by $(p^{*} \cdot q^{*})^{*}$ (this equivalence is a law of Kleene algebra, thus provable in $\mathbf{ACT}_\omega$). The sequent can now be replaced by a sequent without $\vee$ by the following chain of equivalent sequents:
(The last step is due to the equivalence $(A \vee B) \backslash C \equiv (A \backslash C) \wedge (B \backslash C)$, which is provable in $\mathbf{ACT}_\omega$.)
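The star-of-join law $(p \vee q)^{*} = (p^{*} \cdot q^{*})^{*}$ used in this trick can be verified directly in any Kleene algebra:

```latex
\[
(p \vee q)^{*} \preceq (p^{*} \cdot q^{*})^{*}
\quad\text{since}\quad
p \vee q \preceq p^{*} \cdot q^{*}
\ \text{and}\ {*}\ \text{is monotone;}
\]
\[
(p^{*} \cdot q^{*})^{*} \preceq \bigl((p \vee q)^{*}\bigr)^{*} = (p \vee q)^{*}
\quad\text{since}\quad
p^{*} \cdot q^{*} \preceq (p \vee q)^{*} \cdot (p \vee q)^{*} \preceq (p \vee q)^{*}.
\]
```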
Buszkowski also shows how to prove hardness for the fragment without $\wedge$, but then $\vee$ becomes irremovable. We show how to get rid of both $\wedge$ and $\vee$ at once.
The derivability problem in the infinitary Lambek calculus with Kleene star, without additives, is $\Pi_1^0$-complete.
The upper bound follows from Palka’s result by conservativity.
For the lower bound (hardness) we use the following alternation problem, denoted by alt, instead of total. A context-free grammar over the two-letter alphabet $\{a, b\}$ belongs to alt if the language it generates includes all words beginning with $a$ and ending with $b$. (Other words can also belong to this language.) The alternation problem alt is also $\Pi_1^0$-hard, by the following reduction from total: take a context-free grammar with starting symbol $S$ and append a new starting symbol $S'$ with rules $S' \Rightarrow a S b$ and $S' \Rightarrow a b$; the new grammar belongs to alt if and only if the original grammar belongs to total.
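The reduction can be sketched in code (names of our own; membership is checked by a naive recursive procedure that relies on the grammar being $\varepsilon$-free and having no cyclic unit productions):

```python
from functools import lru_cache

def make_alt_grammar(rules, start):
    """total -> alt reduction: add a fresh start symbol Z with rules
    Z => a <start> b and Z => ab.  Assumes 'Z' is not used in `rules`."""
    new = dict(rules)
    new["Z"] = ["a" + start + "b", "ab"]
    return new, "Z"

def derives(rules, sent, word):
    """Does the sentential form `sent` derive the terminal word `word`?
    Nonterminals are the keys of `rules`."""
    @lru_cache(maxsize=None)
    def go(s, w):
        if not s:
            return not w
        head, rest = s[0], s[1:]
        if head not in rules:  # a terminal must match the next letter
            return bool(w) and w[0] == head and go(rest, w[1:])
        # expand the head nonterminal; epsilon-freeness bounds the split
        return any(go(rhs, w[:k]) and go(rest, w[k:])
                   for rhs in rules[head]
                   for k in range(1, len(w) + 1))
    return go(sent, word)
```

For the grammar $S \Rightarrow ab \mid aSb$ (which is far from total), the transformed grammar accepts $ab$ and $aabb$ but not $abab$, so it does not belong to alt, as expected.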
Now we translate our context-free grammar into a Lambek grammar, as Buszkowski does. It is easy to see that the grammar belongs to alt if and only if the corresponding sequent is derivable: the sequent is derivable if and only if so are all its instances for arbitrary choices of the intermediate letters. By cut, one can easily establish that the rule is invertible:
and so is :
Now the “if” part goes by direct application of and and the “only if” one by their inversion.
Now our proof of Theorem §4 will be finished if we manage to formulate these sequents without additive connectives. Recall that the sequents are built from the formulae which are in correspondence with the letters. For a Lambek grammar with unique type assignment there is exactly one formula per letter, so no additive connective is needed. Thus, Theorem §4 now follows from the fact that any context-free grammar without $\varepsilon$-rules can be equivalently transformed into a Lambek grammar with unique type assignment, without Lambek’s restriction. Indeed, our language belongs to alt if and only if any word beginning with $a$ and ending with $b$ belongs to the language. By definition of a Lambek grammar with unique type assignment, this happens exactly when all the corresponding sequents are derivable, and, by conservativity, the same holds in the larger calculus.
§5 Safiullin’s Construction for the Product-Free Calculus
In this section, we consider only the product-free fragment of the calculus; derivability below means derivability in this fragment. By a product-free grammar we mean a Lambek grammar without Lambek’s restriction in which formulae do not include the multiplication operation.
Any context-free grammar without $\varepsilon$-rules can be algorithmically transformed into an equivalent product-free grammar with unique type assignment.
Before presenting the construction of the grammar itself, we introduce Safiullin’s technique of proof analysis for the product-free Lambek calculus, adapted for the case without Lambek’s restriction. This technique has something in common with proof nets and focusing for noncommutative linear logic; however, due to the simplicity of the calculus, it works directly with Gentzen-style cut-free derivations.
We start with a simple and wellknown fact:
The right rules for $\backslash$ and $/$ are reversible, i.e., if $\Pi \vdash A \backslash B$ is derivable, then so is $A, \Pi \vdash B$, and if $\Pi \vdash B \mathbin{/} A$ is derivable, then so is $\Pi, A \vdash B$.
Derivability is established using cut (and then, if we want a cut-free derivation, cut is eliminated).
For any formula let its top be a variable occurrence defined recursively as follows:

the top of a variable is this variable occurrence itself;

the top of $A \backslash B$ and, symmetrically, of $B \mathbin{/} A$ is the top of $B$.
For convenience we consider cut-free derivations with axioms of the form $p \vdash p$, where $p$ is a variable. Axioms $A \vdash A$ with complex formulae $A$ are derivable (induction on $A$).
In a cut-free derivation, the principal occurrence^4 of the succedent’s variable in the antecedent is the one that comes from the same axiom as its occurrence in the succedent. We fix a dedicated notation for the principal occurrence. ^4In proof nets, the principal occurrence is the one connected by an axiom link to the occurrence in the succedent.
Notice that the notion of principal occurrence depends on a concrete derivation, not just on the fact of derivability of .
The principal occurrence is always a top.
Induction on derivation.
Using this lemma, one can locate possible principal occurrences by searching for tops with the same variable as the succedent (which has been reduced to its top variable by Lemma §5).
Introduce the following shortcut for curried passing of denominators: if and are sequences of formulae, then let
( associates to the left and associates to the right). If or is empty, we just omit the corresponding divisions:
(Since and are equivalent, we do not need parentheses here.)
Next follows the decomposition lemma, which allows reverse-engineering of the parts of the antecedent, once we have located the principal occurrence.
If , then (some of may be empty), for any ; (some of may be empty), for any .
Induction on derivation. By definition of principal occurrence, always goes to the right branch in applications of and .
In particular, Lemma §5 yields the following corollary: if the principal occurrence is the top of a formula of the form , then it should be the leftmost formula in the antecedent.
The next ingredient of Safiullin’s construction is the sentinel formula. This formula is used to delimit parts of the sequent and force them to behave independently. Safiullin uses a very simple formula as a sentinel. Without Lambek’s restriction, however, this will not work, because the corresponding sequent with an empty antecedent is now derivable, making the sentinel practically useless. We use a more complicated sentinel, built using the technique of raising: a formula $A$ raised using a variable $q$ is $q \mathbin{/} (A \backslash q)$. Our sentinel is as follows:
Variables , , and are parameters of the sentinel. We shall take fresh variables for them.
The top of is . In the notation of Lemma §5, , thus, it is of the form .
The following lemma trivialises the analysis of sequents of the form , if the principal occurrence happens to be the top of the leftmost .
If does not have or in tops (in particular, could include , since the top of the sentinel is ) and
then .
By Lemma §5,
Inverting in the first sequent yields . If does not include , then the only principal occurrence is the in , and by Lemma §5 , which contradicts Lemma §5: there are no tops in (which is a subsequence of ).
On the other hand, , since . Thus, and
In the first case, by Lemma §5, q.e.d. In the second case, fails, since there is no top in the antecedent.
If , , …, do not have , , or in tops and
then and all are empty. (In other words, the only derivable sequent of this form is the trivial .) In particular, and .
Inverting by Lemma §5, we get
Let us locate the principal occurrence of . This should be a top, therefore it is either the rightmost in or in one of the sentinels. In the first case we get, by Lemma §5,
which immediately fails to be derivable, since there is no top in the antecedent to become the principal occurrence.
In the second case, since all sentinels are of the form , the principal occurrence should be the leftmost one. Thus, and the principal occurrence is the top of the leftmost . (In particular, at least one sentinel should exist, i.e., .)
Now by Lemma §5 should be empty.
Next, we are going to need the construction of joining formulae (Pentus, 1994). A joining formula for a family of formulae is a formula derivable from each member of the family. The joining formula is a substitute for the join ($\vee$) in the language without additives. One can also consider a joining formula for a set of sequences of formulae, derivable from each of these sequences. Due to the substructural nature of the Lambek calculus, a joining formula does not always exist. For example, two different variables, $p$ and $q$, are not joinable. Pentus’ criterion of joinability is based on the free group interpretation of the Lambek calculus.
Let $\mathrm{FG}$ be the free group generated by the set of variables; $\varepsilon$ (the empty word) is its unit. Then $\llbracket A \rrbracket$, the interpretation of a Lambek formula $A$ in $\mathrm{FG}$, is defined recursively as follows:

$\llbracket p \rrbracket = p$ for a variable $p$;

$\llbracket \mathbf{1} \rrbracket = \varepsilon$;

$\llbracket A \cdot B \rrbracket = \llbracket A \rrbracket \, \llbracket B \rrbracket$;

$\llbracket A \backslash B \rrbracket = \llbracket A \rrbracket^{-1} \, \llbracket B \rrbracket$;

$\llbracket B \mathbin{/} A \rrbracket = \llbracket B \rrbracket \, \llbracket A \rrbracket^{-1}$.

For a sequence of formulae its interpretation in $\mathrm{FG}$ is defined as follows:

if $\Pi = \Lambda$, then $\llbracket \Pi \rrbracket = \varepsilon$;

$\llbracket A_1, \ldots, A_n \rrbracket = \llbracket A_1 \rrbracket \cdots \llbracket A_n \rrbracket$.
One can easily see (by induction on derivation) that if $\Pi \vdash A$ is derivable, then $\llbracket \Pi \rrbracket = \llbracket A \rrbracket$.^5 ^5The free group interpretation can be seen as a special case of the interpretation on residuated monoids, with the equality relation ($=$) taken as the preorder ($\preceq$). From this perspective, the fact that derivability of $\Pi \vdash A$ implies $\llbracket \Pi \rrbracket = \llbracket A \rrbracket$ follows from the general soundness statement. Thus, if there exists a joining formula for a set of formulae (or sequences of formulae), then they should have the same free group interpretation. In fact, this gives a criterion for the existence of a joining formula: if all the given sequences have the same free group interpretation, then there exists a product-free formula derivable from each of them.
This theorem is an easy corollary of the results of Pentus (1994). Return to the calculus with multiplication. In this calculus, for each of the given sequences consider the product of its formulae; if a sequence is empty, take a formula over an arbitrary variable instead. Each sequence derives its product. Then apply the main result from Pentus’ article (Pentus, 1994, Thm. 1), which yields a joining formula for these products. This formula could include the multiplication connective. However, for each formula, possibly with multiplication, there exists a product-free formula derivable from it (Pentus, 1994, Lm. 13(i)). By cut, we get a product-free joining formula.
A formula $A$ is called zero-balanced if $\llbracket A \rrbracket = \varepsilon$. By Theorem §5, if all formulae in all the given sequences are zero-balanced, then the family has a joining formula.
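The free group interpretation and the zero-balance check are easy to implement over reduced words (a sketch; the encoding of formulae as nested tuples is our own):

```python
def gmul(u, v):
    """Multiply two reduced words of the free group; letters are pairs
    (generator, exponent) with exponent +1 or -1, cancelled at the junction."""
    u = list(u)
    for g, e in v:
        if u and u[-1] == (g, -e):
            u.pop()            # x . x^{-1} cancels
        else:
            u.append((g, e))
    return tuple(u)

def ginv(u):
    """Inverse of a reduced word."""
    return tuple((g, -e) for g, e in reversed(u))

def interp(f):
    """[[A]]: free group interpretation of a formula.  Encoding: a variable
    is a string, the unit is ('1',), and ('.', A, B), ('\\', A, B), ('/', B, A)
    stand for the product, A under B, and B over A respectively."""
    if f == ("1",):
        return ()
    if isinstance(f, str):
        return ((f, 1),)
    op, x, y = f
    if op == ".":
        return gmul(interp(x), interp(y))
    if op == "\\":
        return gmul(ginv(interp(x)), interp(y))   # [[A\B]] = [[A]]^(-1)[[B]]
    return gmul(interp(x), ginv(interp(y)))       # [[B/A]] = [[B]][[A]]^(-1)

def zero_balanced(f):
    return interp(f) == ()
```

In particular, type raising leaves the interpretation intact: $\llbracket q \mathbin{/} (p \backslash q) \rrbracket = p$.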
The sentinel formula is zerobalanced.
Raising does not change the free group interpretation of a formula:
Now
Now consider a set of zero-balanced formulae and take fresh variables, not occurring in them. Consider the following two sets of sequences of formulae:
All formulae here are zero-balanced. Therefore, Theorem §5 yields a joining formula for each set: there exist formulae (in the language of $\backslash$ and $/$) such that for all we have
Let
The formula , in a sense, would play the rôle of .
We also define versions of and , for , which lack the sentinel on one edge:
For convenience, we also define . Now for any we have
for any .
By , , , is derivable from . By definition of , the latter is the same as
By construction, we have
Thus, applying and , we reduce to
This sequent is exactly , which is derivable by several applications of (recall that is a sequence of formulae).
Let be a nonempty sequence of types whose tops are not , , , , and let . Then for some we have , where is a suffix of and is a prefix of .
Inverting and gives (, , and do not have in tops). Now apply Lemma §5. The sequence gets split into , and we have , , , …, , .
Here we consider the interesting case of a nonempty . The case of is similar and is handled in Lemma §5 below. If as a whole comes into one of , , this is exactly what we want ().
In the other case, there are two possibilities: either a nonempty part of , denoted by , comes to a part , where ( is a suffix of , is a prefix of ; if is not the whole , one of them is empty), or for some we have , and parts of come to and . The latter case is impossible, since