On the Boundedness Problem for Higher-Order Pushdown Vector Addition Systems

LIPICS - Leibniz International Proceedings in Informatics, Nov 2018

Karp and Miller's algorithm is a well-known decision procedure that solves the termination and boundedness problems for vector addition systems with states (VASS), or equivalently Petri nets. This procedure was later extended to a general class of models, well-structured transition systems, and, more recently, to pushdown VASS. In this paper, we extend pushdown VASS to higher-order pushdown VASS (called HOPVASS), and we investigate whether an approach à la Karp and Miller can still be used to solve termination and boundedness. We provide a decidable characterisation of runs that can be iterated arbitrarily many times, which is the main ingredient of Karp and Miller's approach. However, the resulting Karp and Miller procedure only gives a semi-algorithm for HOPVASS. In fact, we show that coverability, termination and boundedness are all undecidable for HOPVASS, even in the restricted subcase of one counter and an order 2 stack. On the bright side, we prove that this semi-algorithm is in fact an algorithm for higher-order pushdown automata.



FSTTCS 2018

Vincent Penelle, LaBRI, Univ. Bordeaux, Bordeaux-INP, Talence, France
Sylvain Salvati, CRIStAL, Univ. Lille, INRIA, Lille, France
Grégoire Sutre, LaBRI, Univ. Bordeaux, CNRS, Bordeaux-INP, Talence, France

2012 ACM Subject Classification: Theory of computation → Formal languages and automata theory; Theory of computation → Logic and verification

Keywords and phrases: Higher-order pushdown automata; Vector addition systems; Boundedness problem; Termination problem; Coverability problem

Funding: This work was supported by the grant ANR-17-CE40-0028 of the French National Research Agency ANR (project BRAVAS).

1 Introduction

Termination of a program is a desirable feature in computer science. As it is undecidable on Turing machines, an important challenge is to find models as expressive as possible while retaining decidability of termination.
A prominent model having this property is vector addition systems with states (or VASS for short), introduced by Karp and Miller (without states) to model and analyse concurrent systems [10]. They also provide an algorithm, known as the Karp and Miller tree, which can decide termination as well as boundedness (i.e., finiteness of the set of reachable configurations). This algorithm is not optimal complexity-wise, as it has an Ackermannian worst-case running time [19, 20], whereas termination and boundedness for VASS are ExpSpace-complete [17, 22]. But it is conceptually simple, and so amenable to other models. Karp and Miller's algorithm has been extended, into a so-called reduced reachability tree, to the general class of well-structured transition systems (WSTS), to which VASS belong [6, 7]. It has also recently been applied to VASS equipped with one pushdown stack [14]. These pushdown VASS are not WSTS, thus showing that Karp and Miller's algorithm can apply outside the realm of WSTS.

A well-known extension of pushdown automata is higher-order pushdown automata (or HOPDA for short), introduced in [18, 8, 1], in which stacks are replaced by higher-order stacks: an order n stack is a stack of order (n − 1) stacks, with an order 1 stack being a classical stack, and the operations on an order n stack being the copying of the topmost order (n − 1) stack on top of it, and its inverse operation. This model is interesting for modelling because of its equivalence to safe higher-order recursion schemes [11]. Furthermore, the transition graphs of HOPDA are exactly the graphs of the so-called prefix-recognisable hierarchy [5, 4], which are known to enjoy decidable MSO model-checking. As all the graphs of the hierarchy enjoy the same decidable properties, a tempting conjecture is that what holds for models with an order 1 auxiliary stack also holds for the same models with an order n auxiliary stack, for any order n.
The starting point of the present work was thus the conjecture that Karp and Miller's algorithm would be a decision tool for termination and boundedness for higher-order pushdown VASS (or HOPVASS for short).

Contribution. Our contribution in this paper is twofold. We first show that termination, and therefore boundedness, are undecidable for HOPVASS by reducing from termination of Minsky counter machines through a stepwise simulation. We also show that the coverability problem (also known as the control-state reachability problem) is undecidable as well, through the same simulation. Our undecidability results hold even in the restricted subcase of one counter and an order 2 stack. This is in sharp contrast with the same model at order 1, for which boundedness and coverability are decidable [14, 16]. We then give a decidable criterion over sequences of higher-order stack operations which characterises those that can be applied arbitrarily many times. The detection of such sequences is crucial for the implementation of Karp and Miller's algorithm. Our criterion, which is decidable in quadratic time, makes Karp and Miller's approach implementable for HOPVASS, but the resulting procedure is only a semi-algorithm. It can find witnesses of non-termination or unboundedness, but it does not terminate in general because, contrary to WSTS and order 1 pushdown VASS, there might be infinite runs that contain no iterable factor. We provide an example that illustrates this fact. More interestingly, we prove, thanks to the same iterability criterion, that our semi-algorithm always terminates on HOPDA. This means that Karp and Miller's algorithm also applies to HOPDA, and thus provides a decision procedure that solves termination and boundedness for HOPDA.

Related work and discussion. The coverability and reachability problems for order 1 pushdown VASS are inter-reducible (in logspace) and Tower-hard [12, 13]. Their decidability status is still open.
The boundedness problem for the same model is decidable, and its complexity is between Tower and Hyper-Ackermann [14]. For the subcase of only one counter, coverability is decidable [16] and boundedness is solvable in exponential time [15].

The main framework for our presentation comes from the description of regular sets of higher-order stacks by Carayol presented in [2, 3]. We borrow from it the notion of reduced sequence of operations as a short description of the effect of a sequence. Our criterion for iterability is a modification of that reduction notion, in which we aim to keep the domain of definition of the sequence (which is not stable under reduction). To solve this issue, Carayol introduced test operations. Instead, we simply weaken the reduction by forbidding it to reduce destructive tests of the highest level, and consider a so-obtained weak-reduced sequence for every order. This iterability criterion is similar to the result of Parys in [21], in the sense that the underlying idea is that a sequence of operations is iterable if, and only if, it does not decrease the number of (k − 1)-stacks in the topmost k-stack, for every k. Otherwise, our presentation and our techniques are very different from those of Parys.

To our knowledge, termination and boundedness have never been directly studied on HOPDA. However, there are several existing works from which decidability of termination and boundedness for HOPDA could easily be derived. For example, in [9], Hague, Kochems and Ong study the downward closure of languages of HOPDA, and compute it by deciding the simultaneous unboundedness problem of their languages. It follows that finiteness of the language defined by a HOPDA is decidable. Termination and boundedness are easily reducible¹ to the latter problem.

2 Preliminaries

Higher-order pushdown automata. We consider a finite alphabet Σ. The set of order 1 stacks (or 1-stacks) over Σ is the set Stacks1(Σ) = Σ∗. We denote a 1-stack s as s = [s1 · · ·
s|s|]1, where |s| is the length of s and s|s| is the topmost letter of s. The empty stack is denoted []1. For every a ∈ Σ, we define the operation pusha, which adds an a at the top of a stack, and popa, which removes the topmost letter of a stack if it is an a and is not applicable otherwise. Formally, pusha and popa are partial functions from Stacks1(Σ) to Stacks1(Σ), defined by pusha([s1 · · · s|s|]1) = [s1 · · · s|s|a]1, and popa(s) = s′ if and only if pusha(s′) = s. We define the set of order 1 operations Op1(Σ) = {pusha, popa | a ∈ Σ}. When Σ is understood, we omit it (we will do so from now on).

For n > 1, we define the set of order n stacks (or n-stacks) over Σ as Stacksn = (Stacksn−1)+. We denote an n-stack s as s = [s1 · · · s|s|]n, where s|s| is the topmost (n − 1)-stack of s. The stack [[]n−1]n is denoted []n for short, and abusively called the empty stack. We define the operation copyn, which copies the topmost (n − 1)-stack on the top of the stack it is applied to, and its inverse copȳn, which removes the topmost (n − 1)-stack of a stack if it is equal to the one right below it, and is not applicable otherwise. Formally, copyn and copȳn are partial functions from Stacksn to Stacksn, defined by copyn([s1 · · · s|s|]n) = [s1 · · · s|s|s|s|]n, and copȳn(s) = s′ if and only if copyn(s′) = s. We define the set of order n operations Opn = {copyn, copȳn} ∪ Opn−1, and we define the application of an operation θ of Opn−1 to an n-stack s as θ(s) = [s1 · · · s|s|−1θ(s|s|)]n. Given θ ∈ Opn, we let θ̄ denote its inverse, i.e., push̄a = popa, pop̄a = pusha, and the inverse of copȳi is copyi. Finally, we inductively define the topmost k-stack of an n-stack s = [s1 · · · s|s|]n as topn(s) = s, and topk(s) = topk(s|s|) for k < n.

▶ Example 1. Assuming that Σ = {a, b}, we have pusha([[ab]1[b]1]2) = [[ab]1[ba]1]2, copy2([[[ab]1]2[[a]1[b]1]2]3) = [[[ab]1]2[[a]1[b]1[b]1]2]3, and copȳ2([[b]1[a]1]2) is not defined.
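The stack operations above can be made concrete with a small executable sketch: we model an order 1 stack as a Python list of symbols and an order n stack as a non-empty list of order (n − 1) stacks. The encoding and the names `apply_op`, `'push'`, `'pop'`, `'copy'`, `'copybar'` are ours (with `'copybar'` standing for the barred, destructive copy); partial operations return `None` where the paper's operations are undefined.

```python
# Higher-order stacks as nested lists; an operation of order k applied
# to an n-stack with k < n acts on the topmost (n-1)-stack, as in the
# definition theta(s) = [s1 ... s_{|s|-1} theta(s_{|s|})]_n.

def apply_op(op, s, n):
    """Apply operation op to the order-n stack s; None if undefined."""
    kind = op[0]
    k = 1 if kind in ('push', 'pop') else op[1]
    if k < n:
        # Lower-order operations recurse into the topmost (n-1)-stack.
        t = apply_op(op, s[-1], n - 1)
        return None if t is None else s[:-1] + [t]
    if kind == 'push':                       # push_a
        return s + [op[1]]
    if kind == 'pop':                        # pop_a: top letter must be a
        return s[:-1] if s and s[-1] == op[1] else None
    if kind == 'copy':                       # copy_n
        return s + [s[-1]]
    # 'copybar': remove the topmost (n-1)-stack if equal to the one below
    return s[:-1] if len(s) >= 2 and s[-1] == s[-2] else None
```

Replaying Example 1 with this sketch reproduces the three cases, including the undefined application of the barred copy.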
An order n pushdown automaton, or n-PDA for short, or HOPDA if the order is left implicit, is a tuple A = (Q, qinit, Σ, Δ), where Q is a finite set of states, qinit is an initial state, Σ is a stack alphabet and Δ ⊆ Q × Opn × Q is a finite set of transitions. A transition (p, θ, q) ∈ Δ is also written as p −θ→ q. A configuration of A is a pair (q, s), where q ∈ Q and s ∈ Stacksn. The initial configuration is (qinit, []n). A step of A is a triple ((p, s), θ, (q, t)), where (p, s) and (q, t) are configurations and θ is an operation, such that p −θ→ q and t = θ(s). Such a step is also written as (p, s) −θ→ (q, t). A run of A is an alternating sequence (q0, s0), θ1, (q1, s1), . . . , θk, (qk, sk) of configurations (qi, si) and operations θi, such that (qi−1, si−1) −θi→ (qi, si) for every 0 < i ≤ k. Such a run is also written as (q0, s0) −θ1→ (q1, s1) · · · −θk→ (qk, sk), and it is called initialised when (q0, s0) is the initial configuration. The reachability set of A is the set of configurations (q, s) such that there is an initialised run in A that ends with (q, s).

▶ Remark. Instead of the copȳn operation, the literature usually considers a popn operation that destroys the topmost (n − 1)-stack (provided that there is one below it). Formally, popn([s1 · · · s|s|−1s|s|]n) = [s1 · · · s|s|−1]n if |s| > 1, and popn is undefined otherwise.

¹ For termination, simply make all transitions output a letter and make all states accepting, then observe that the resulting HOPDA terminates if, and only if, its language is finite. This observation follows from König's lemma together with the fact that HOPDA are finitely branching. For boundedness, add a new accepting state that the HOPDA may non-deterministically jump to, and from which it "dumps" the contents of the stack on the output tape. All original states are non-accepting and all original transitions are silent. It is readily seen that the resulting HOPDA is bounded if, and only if, its language is finite.
Following Carayol [2], we prefer the more symmetric operation copȳn, which destroys the topmost (n − 1)-stack only if it is equal to the previous one.

Higher-order pushdown vector addition systems with states. We let N denote the set of natural numbers N = {0, 1, . . .} and Z the set of integers Z = {. . . , −1, 0, 1, . . .}. Consider a dimension d ∈ N with d > 0. Given a set S and a vector x in Sd, we let x(c) denote the cth component of x, i.e., x = (x(1), . . . , x(d)). An order n pushdown vector addition system with states of dimension d, or d-dim n-PVASS for short, or HOPVASS if the order and the dimension are left implicit, is a tuple S = (Q, qinit, Σ, Δ), where Q is a finite set of states, qinit is an initial state, Σ is a stack alphabet and Δ ⊆ Q × Zd × Opn × Q is a finite set of transitions. Vectors a ∈ Zd are called actions. A configuration of S is a triple (q, x, s), where q ∈ Q, x ∈ Nd and s ∈ Stacksn. Intuitively, the dimension d is the number of counters, and x(1), . . . , x(d) are the values of these counters. The initial configuration is (qinit, 0, []n). A step of S is a triple (p, x, s) −a,θ→ (q, y, t), where (p, x, s) and (q, y, t) are configurations, a is an action and θ is an operation, such that p −a,θ→ q, y = x + a and t = θ(s).² The notions of run, initialised run and reachability set are defined in the same way as for n-PDA.

Coverability, termination and boundedness. We investigate in this paper three basic verification problems on HOPVASS: coverability, termination and boundedness. The coverability problem asks, given a HOPVASS S and a state q of S, whether the reachability set of S contains a configuration whose state is q. The termination problem asks, given a HOPVASS S, whether all initialised runs of S are finite. The boundedness problem asks, given a HOPVASS S, whether the reachability set of S is finite.
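The step relation of a HOPVASS combines a counter update that must stay in N^d with a partial stack operation, and the step is disabled if either part fails. A minimal sketch, with our own helper name `hopvass_step` and with the stack operation passed as any partial function returning `None` when undefined:

```python
# One HOPVASS step: y = x + a componentwise (must remain nonnegative),
# t = op(s) (must be defined); otherwise the step is disabled.

def hopvass_step(x, a, s, op):
    """Return (y, t) for the step, or None if the step is disabled."""
    y = tuple(xc + ac for xc, ac in zip(x, a))
    if any(yc < 0 for yc in y):
        return None            # the implicit condition x + a >= 0
    t = op(s)
    return None if t is None else (y, t)
```

For instance, with a 2-dimensional counter vector and an order 1 push, a step with action (−1, 2) is enabled from (1, 0) but disabled from (0, 0).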
Observe that HOPVASS are finitely branching, i.e., each configuration is the source of only finitely many steps. This entails that termination is Turing-reducible to boundedness for HOPVASS. Indeed, if the reachability set of a HOPVASS is infinite, then it necessarily has an infinite initialised run, by König's lemma (applied to its reachability tree). Otherwise, we may decide whether it has an infinite initialised run by exploring its reachability graph, which is finite and computable.

² The definition of configurations requires counters to be nonnegative. So the equality y = x + a carries the implicit condition that x + a ≥ 0.

3 Undecidability of Coverability and Termination for 1-dim 2-PVASS

It is known that coverability is decidable for 1-dim 1-PVASS [16] and that termination and boundedness are decidable for 1-PVASS of arbitrary dimension [14]. We show in this section that all three problems are undecidable for HOPVASS, even in the restricted subcase of 1-dim 2-PVASS. Our proof proceeds by reduction from coverability and termination in Minsky counter machines, through a stepwise simulation of these machines by 1-dim 2-PVASS.

We use a non-standard presentation of Minsky counter machines (simply called counter machines in the sequel) that is close to VASS and more convenient for our purpose than the standard one. A d-counter machine is a triple M = (Q, qinit, Δ), where Q is a finite set of states, qinit is an initial state and Δ ⊆ Q × (Z ∪ {T})d × Q is a finite set of transitions. Vectors a ∈ (Z ∪ {T})d are called actions. A configuration of M is a pair (q, x), where q ∈ Q and x ∈ Nd. The initial configuration is (qinit, 0). A step of M is a triple (p, x) −a→ (q, y), where (p, x) and (q, y) are configurations and a is an action, such that p −a→ q and

    y(c) = x(c) + a(c)    if a(c) ∈ Z
    y(c) = x(c) = 0       if a(c) = T        (1)

for every counter 1 ≤ c ≤ d. The notions of run, initialised run and reachability set are defined in the same way as for n-PDA.
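Equation (1) can be read off directly as a small step function: a component carrying an integer is updated (and must stay nonnegative), while a component carrying the symbol T is zero-tested. A sketch with our own name `cm_step`, encoding T as the string `'T'`:

```python
# One d-counter-machine step per Equation (1); returns the successor
# counter vector, or None if the step is disabled.

def cm_step(x, a):
    """Return y with x -a-> y, or None."""
    y = []
    for xc, ac in zip(x, a):
        if ac == 'T':
            if xc != 0:
                return None    # zero-test fails
            y.append(0)
        else:
            if xc + ac < 0:
                return None    # counters live in N
            y.append(xc + ac)
    return tuple(y)
```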
It is well-known that, for every d ≥ 2, coverability, termination and boundedness are undecidable for d-counter machines.

Our simulation of a d-counter machine M by a 1-dim 2-PVASS S roughly proceeds as follows. To prevent confusion between the counters of M and the counter of S, we will denote the latter by κ. When S is idle, meaning that it is not simulating a step of M, its counter κ is zero and its stack is of the form [[(T, . . . , T)a1 · · · ak]1]2, where each ai ∈ (Z ∪ {T})d is an action of M. Intuitively, the word w = a1 · · · ak, which we call the history, is the sequence of actions that M has taken to reach its current configuration. The vector (T, . . . , T) acts as a bottom symbol. To simulate a step (p, x) −a→ (q, y) of M, S pushes a onto its stack, which becomes [[(T, . . . , T)wa]1]2, and then it checks that its new history wa corresponds to a legal sequence of actions (starting from 0) with respect to Equation (1). To perform this check, S uses the history w and its counter κ to verify, for each counter 1 ≤ c ≤ d, that wa is legal with respect to c. It can do so without destroying the history thanks to copy2 and copȳ2 operations. When all checks are complete, S is again idle, its counter κ is zero and its stack is [[(T, . . . , T)wa]1]2.

We now present our simulation of d-counter machines by 1-dim 2-PVASS in detail. We start with some additional notations. Given an action a ∈ (Z ∪ {T})d, we let −a→ denote the binary relation on Nd defined by x −a→ y if Equation (1) holds. Given a word w = a1 · · · ak of actions ai ∈ (Z ∪ {T})d, we let −w→ denote the binary relation on Nd defined by x −w→ y if there exist x0, . . . , xk such that x = x0 −a1→ x1 −a2→ · · · −ak→ xk = y, with the convention that −ε→ is the identity relation on Nd. The notation x −w→ means that x −w→ y for some y. We define the displacement δ(w) of a word w = a1 · · · ak in (Zd)∗ by δ(w) = a1 + · · · + ak.
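The relation −w→ and the displacement δ(w) can be sketched by iterating the one-step semantics of Equation (1); the function names `delta` and `run_word` are ours. For zero-test-free words, `run_word` computes x + δ(w) while checking that every prefix keeps all counters nonnegative, matching the characterisation stated next.

```python
# Displacement and iterated step relation for words of actions over
# (Z ∪ {T})^d, with 'T' encoding the zero-test symbol.

def delta(w):
    """Componentwise sum of a word of Z^d actions."""
    d = len(w[0])
    return tuple(sum(a[c] for a in w) for c in range(d))

def run_word(x, w):
    """Return y with x -w-> y (iterating Equation (1)), or None."""
    for a in w:
        y = []
        for xc, ac in zip(x, a):
            if ac == 'T':
                if xc != 0:
                    return None
                y.append(0)
            else:
                if xc + ac < 0:
                    return None
                y.append(xc + ac)
        x = tuple(y)
    return x
```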
Observe that, for such a word w ∈ (Zd)∗, it holds that x −w→ y if, and only if, x + δ(w) = y and x + δ(v) ≥ 0 for every prefix v of w. We extend the vector notation a(c) to sequences of actions a1 · · · ak ∈ ((Z ∪ {T})d)∗ by letting (a1 · · · ak)(c) denote the word in (Z ∪ {T})∗ defined by (a1 · · · ak)(c) = a1(c) · · · ak(c). Note that for every x, y ∈ Nd and w ∈ ((Z ∪ {T})d)∗, it holds that x −w→ y if, and only if, x(c) −w(c)→ y(c) for every 1 ≤ c ≤ d. Observe that the relation −w→ is forward-deterministic, i.e., x −w→ y ∧ x −w→ y′ =⇒ y = y′. In a d-counter machine or 1-dim 2-PVASS, given two configurations α and β, we let α −→∗ β denote the existence of a run from α to β.

[Figure 1: (a) The gadgets Fc and Bc that apply, forward for Fc and backward for Bc, the current history of the cth counter; the constant K ∈ N satisfies |a(c)| < K for every a ∈ Σ with a(c) ≠ T. (b) The gadget Cc that checks that the most recent action of the history is applicable for the cth counter. (c) Translation of a d-counter machine M into a 1-dim 2-PVASS S.]

We fix, for the remainder of this section, a d-counter machine M = (Q, qinit, Δ). Let Σ ⊆ (Z ∪ {T})d denote the set of actions of M, formally, Σ = {a | ∃p, q : p −a→ q}. We build from M a 1-dim 2-PVASS S with stack alphabet Σ. To simplify the presentation, we introduce, for every a ∈ Σ, a new order 1 operation peeka = pusha ◦ popa that tests, without changing the stack, that the topmost letter of the stack is an a, and is not applicable otherwise. The addition of these peeka operations has no impact on the decidability status of coverability, termination and boundedness for 1-dim 2-PVASS. We present the 1-dim 2-PVASS S that simulates the d-counter machine M in a “bottom-up” fashion. Recall that κ denotes the counter of S.
We start with two gadgets Fc and Bc, where 1 ≤ c ≤ d, that apply on κ, forward for Fc and backward for Bc, the current history of the cth counter of M. More precisely, they apply the displacement of the suffix v of the history that starts after the most recent zero-test on c. These gadgets are depicted in Figure 1a. Let us explain the behaviour of Fc. We ignore K for the moment. Firstly, the copy2 from A to B copies the history so that it can be restored before leaving the gadget. The loop on B together with the transition from B to C locates the most recent zero-test on c in the history. The loop on C guesses the suffix v of the history and replays v(c) ∈ Z∗ on the counter κ. Lastly, the copȳ2 from C to D ensures that the guesses made in state C are correct and restores the stack to its original contents before entering the gadget. The increments by K in the loop on B are matched by decrements by K in the loop on C. So they do not change the global displacement realised by Fc, which is δ(v(c)). Their purpose is to ensure that the loop on C can be taken only finitely many times. This is crucial for termination. The behaviour of Bc is identical to that of Fc, except that when v(c) ∈ Z∗ is replayed on the counter κ, the opposite of each action is applied instead of the action itself. So the global displacement realised by Bc is −δ(v(c)). The following lemma shows that Fc and Bc behave as expected. All proofs of this section can be found in Appendix A.

▶ Lemma 2. Let x, y ∈ N and s, t ∈ Stacks2. Assume that s = [[ubv]1]2 where b ∈ Σ and u, v ∈ Σ∗ are such that b(c) = T and v(c) ∈ Z∗. Then the following assertions hold:
(A, x, s) −→∗ (D, y, t) in Fc if, and only if, s = t and x + δ(v(c)) = y,
(E, x, s) −→∗ (H, y, t) in Bc if, and only if, s = t and x − δ(v(c)) = y.

Our next gadget, the 1-dim 2-PVASS Cc, where 1 ≤ c ≤ d, is depicted in Figure 1b. It uses the gadgets Fc and Bc as subsystems.
It is understood that each action a ∈ Σ with a(c) = T induces a distinct copy of Bc in Cc. Provided that κ = 0 and that w is a legal sequence of actions 0 −w→ x, the gadget Cc checks that the most recent action a of the history wa is applicable for the cth counter of M, i.e., that x(c) −a(c)→. If a(c) ∈ Z then Cc goes through Fc, which checks that x(c) + a(c) ≥ 0 and exits with κ = x(c) + a(c), and then it goes through Bc, which reverts the changes that Fc did on κ. If a(c) = T then Cc pops a and then goes through Bc, which checks that x(c) ≤ 0 (hence, x(c) = 0) and exits with κ = 0, and then pushes a back. In both cases, κ and the stack are restored to their original contents before entering the gadget. The following lemma shows that Cc behaves as expected.

▶ Lemma 3. Let y ∈ N and s, t ∈ Stacks2. Assume that s = [[(T, . . . , T)wa]1]2 where a ∈ Σ and w ∈ Σ∗ are such that 0 −w→. Then (I, 0, s) −→∗ (J, y, t) in Cc if, and only if, y = 0, s = t and 0 −w(c)a(c)→.

We are now ready to present our translation of the d-counter machine M into an “equivalent” 1-dim 2-PVASS S. The translation is depicted in Figure 1c, and corresponds to the informal description of S that followed the definition of d-counter machines. It is understood that each state q of M induces distinct copies of C1, . . . , Cd in S. The initial state of S is q̃init. We need a few additional notations to prove that this translation is correct in the sense that it preserves coverability and termination. Given x ∈ Nd and s ∈ Stacks2, we let x ⋈ s denote the existence of w ∈ Σ∗ such that 0 −w→ x and s = [[(T, . . . , T)w]1]2. Given two configurations (q̃, x, s) and (q, y, t) of S, where q ∈ Q is a state of M, we let (q̃, x, s) ⇝ (q, y, t) denote a run in S from (q̃, x, s) to (q, y, t) with no intermediate state in Q. Put differently, such a run is the concatenation of runs of C1, . . . , Cd, except for its first and last steps (see Figure 1c).
The correctness of the translation of M into S is shown by the following two lemmas. Lemma 4 shows that ⋈ induces a “weak simulation” relation from M to S. This lemma is used to show that every (possibly infinite) initialised run of M can be translated into an initialised run of S. Lemma 6 shows that the subsystem q̃ → C1 → · · · → Cd → q of S works as expected, in that it correctly checks that the most recent action of the history is applicable provided that the previous actions of the history are applicable. This lemma is used to show that every (possibly infinite) initialised run of S can be translated back into an initialised run of M.

▶ Lemma 4. Let x ∈ Nd and s ∈ Stacks2 such that x ⋈ s. For every step (p, x) −a→ (q, y) in M, there exists a run (p, 0, s) −pusha→ (q̃, 0, t) ⇝ (q, 0, t) in S with y ⋈ t.

▶ Corollary 5. For every initialised run (q0, x0) −a1→ (q1, x1) · · · −ak→ (qk, xk) · · · in M, there is an initialised run (q̃init, 0, []2) −push(T,...,T)→ (q0, 0, s0) −pusha1→ (q̃1, 0, s1) ⇝ (q1, 0, s1) · · · −pushak→ (q̃k, 0, sk) ⇝ (qk, 0, sk) · · · in S.

▶ Lemma 6. Assume that t = [[(T, . . . , T)wa]1]2 where a ∈ Σ and w ∈ Σ∗ are such that 0 −w→. For every run (q̃, 0, t) ⇝ (q, x, s), it holds that x = 0, s = t and 0 −wa→.

▶ Corollary 7. Every initialised run of S that is infinite or ends with a configuration whose state is in Q is of the form (q̃init, 0, []2) −push(T,...,T)→ (q0, 0, s0) −pusha1→ (q̃1, 0, s1) ⇝ (q1, 0, s1) · · · −pushak→ (q̃k, 0, sk) ⇝ (qk, 0, sk) · · · with qi ∈ Q. Moreover, for every such run in S, there is an initialised run (q0, x0) −a1→ (q1, x1) · · · −ak→ (qk, xk) · · · in M.

An immediate consequence of Corollaries 5 and 7 is that the coverability and termination problems for d-counter machines are many-one reducible to the coverability and termination problems for 1-dim 2-PVASS, respectively. Since coverability and termination are undecidable for 2-counter machines, they are also undecidable for 1-dim 2-PVASS.
Moreover, as mentioned in Section 2, termination is Turing-reducible to boundedness for 1-dim 2-PVASS, since they are finitely branching. We have shown the following theorem.

▶ Theorem 8. The coverability problem, the termination problem and the boundedness problem are undecidable for 1-dim 2-PVASS.

▶ Remark. Theorem 8 also holds for 1-dim 2-PVASS defined with pop2 operations instead of copȳ2 operations. Indeed, we may replace the gadgets Fc and Bc by “equivalent” gadgets F′c and B′c using pop2 and no copȳ2. Intuitively, instead of guessing and replaying in state C the suffix of the history, F′c copies the history twice. Each loop uses a fresh copy of the history and then destroys this copy with a pop2. Both loops use popa operations to browse through the history (backwards). The construction of B′c is similar. The new gadgets F′c and B′c also satisfy Lemma 2, and it follows that the resulting 1-dim 2-PVASS S′ also simulates the d-counter machine M.

4 Iterability of Operation Sequences

In this section, we show that we can characterise exactly the sequences of operations that can be applied arbitrarily many times to a given stack. This result is used in the next section to provide a semi-decision procedure for the non-boundedness and non-termination problems for HOPVASS, using the Karp and Miller reduced tree (as used in [14]).

We consider sequences of operations in Opn, called n-blocks for short. Given ρ = θ1 · · · θm ∈ Opn∗, we identify it with the partial function ρ = θm ◦ θm−1 ◦ · · · ◦ θ1. We denote by dom(ρ) the set of n-stacks s such that ρ(s) is defined. We define ρ̄ = θ̄m · · · θ̄1, and observe that ρ̄ is the partial inverse of ρ. We want to characterise n-blocks which are iterable, i.e., which can be applied arbitrarily many times to a given stack.

▶ Definition 9. An n-block ρ is iterable on a stack s if for all i, s ∈ dom(ρi).

To investigate iterability, we are interested in the global effect of an n-block while keeping track of its condition of application.
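An n-block and its inverse can be sketched concretely; for brevity we restrict the sketch to order 1 operations, encoded as tuples `('push', a)` and `('pop', a)` (our own encoding and names). A block is applied left to right, and its inverse reverses the sequence and inverts each operation, giving the partial inverse of the block as a function.

```python
# Blocks of order 1 operations as partial functions on 1-stacks
# (Python lists of symbols).

def bar_op(op):
    """Inverse of a single operation: push_a <-> pop_a."""
    kind, a = op
    return ('pop' if kind == 'push' else 'push', a)

def apply_block(block, s):
    """Apply the operations of block to the 1-stack s, left to right;
    return None as soon as one operation is undefined (s not in dom)."""
    for kind, a in block:
        if kind == 'push':
            s = s + [a]
        elif s and s[-1] == a:          # pop_a needs top letter a
            s = s[:-1]
        else:
            return None
    return s

def bar_block(block):
    """The inverse block: reversed sequence of inverted operations."""
    return [bar_op(op) for op in reversed(block)]
```

Applying a block and then its inverse restores the original stack whenever the block was applicable, which is the "partial inverse" property.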
We thus need a normal form of sequences which keeps track of these two things, and a criterion on this normal form to determine whether it is iterable or not. Following Carayol [2, 3], we say that an n-block is reduced if it does not contain any factor of the form θθ̄ with θ ∈ Opn (in [2], such blocks are called minimal; most details come from [3]). Given an n-block ρ, we let red(ρ) denote the unique reduced operation sequence obtained from ρ by recursively removing θθ̄ factors. It is immediate to observe that red(ρ)(s) = ρ(s) for every stack s ∈ dom(ρ). In particular, dom(ρ) ⊆ dom(red(ρ)). Intuitively, red(ρ) is the minimal n-block performing the transformation performed by ρ, in the sense that it does not contain a factor which does not modify the stack. The reduced n-block is thus a good normal form for determining the global effect of an n-block. However, reducing an n-block may yield an n-block with a wider domain, e.g., popa pusha is only applicable to stacks whose topmost symbol is an a, while its reduction is ε, which is applicable to all stacks. Thus, red is not a good tool to investigate iterability. We need to preserve the domain of applicability of an n-block, while getting rid of factors that are always applicable and do not modify the stack. To do this, Carayol adds test operations in the normal form of regular sets of n-blocks, at the expense of augmenting the number of operations and having no real normal form for n-blocks themselves, as some tests may be redundant and not easy to eliminate. We propose here a different approach, which consists in associating to each n-block n normal blocks, one for every order. Each keeps track of the destructive operations of its order that the original n-block has to perform to be applicable, while getting rid of factors which do not modify the stack and do not restrict the domain of application, and reducing the block in the classical sense for lower orders.
To this end, we define a weaker variant of reduction, redn, which does not remove factors of the form copȳn copyn but is otherwise identical to red. The idea will thus be to consider, for an n-block ρ, all its weak reduced blocks at every order: ρ will be iterable if, and only if, for every k, redk(ρ) is iterable. Furthermore, it will be possible to check syntactically whether redk(ρ) is iterable or not. The restriction of an n-block ρ to an order k, written ρ|k, is the k-block obtained by removing every operation of order strictly higher than k in ρ, e.g., (pusha copy2 pushb)|1 = pusha pushb.

▶ Definition 10. Given an n-block ρ and an order k ≤ n, we call redk(ρ) the only k-block obtained from ρ|k by applying the following rewriting system: θθ̄ → ε, for θ ∈ Opk \ {copȳk} if k > 1, and θ ∈ {pusha | a ∈ Σ} if k = 1.

Observe that this rewriting system is confluent, as is the classical reduction rewriting system. Therefore, redk(ρ) can be computed in linear time, by always reducing the leftmost factor first. The following theorem shows that the domain of an n-block ρ is equal to the intersection of the domains of the weak reductions of ρ of every order. This implies that red1(ρ), . . . , redn(ρ) is indeed a good normal form of ρ, in the sense that it describes entirely the effect of ρ in a canonical way and it preserves its domain of definition (contrary to red(ρ)). Intuitively, this result comes from the fact that whenever a reduction step (in red) enlarges the domain of applicability of ρ, it is due to the removal of a factor of the form copȳk copyk (or popa pusha). As such factors are left intact in redk(ρ), a stack added to the domain of red(ρ) in this way is not added to the domain of redk(ρ).

▶ Theorem 11. For every n-stack s and n-block ρ, s ∈ dom(ρ) if, and only if, for every k ≤ n, s ∈ dom(redk(ρ)).

Proof. (⇒) Suppose s ∈ dom(ρ). By definition of application, s ∈ dom(ρ|k) for every k.
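Since the rewriting system is confluent and we may always reduce the leftmost factor first, the weak reduction can be sketched as a single left-to-right pass with a stack of pending operations. The encoding and all function names are ours: operations are tuples `('push', a)`, `('pop', a)`, `('copy', k)`, `('copybar', k)`, where `'copybar'` stands for the barred (destructive) copy.

```python
# red (classical reduction) and red_k (weak reduction of Definition 10)
# by one-pass cancellation of theta theta-bar factors.

def order(op):
    return 1 if op[0] in ('push', 'pop') else op[1]

def bar(op):
    flip = {'push': 'pop', 'pop': 'push', 'copy': 'copybar', 'copybar': 'copy'}
    return (flip[op[0]], op[1])

def _kept(theta, k):
    """Factors theta theta-bar that red_k must NOT remove: the
    destructive tests of the highest order k."""
    if order(theta) != k:
        return False
    return theta[0] == 'copybar' if k > 1 else theta[0] == 'pop'

def red(block):
    """Classical reduction: remove every theta theta-bar factor."""
    out = []
    for op in block:
        if out and op == bar(out[-1]):
            out.pop()
        else:
            out.append(op)
    return out

def red_k(block, k):
    """Weak reduction: restrict to orders <= k, then cancel theta
    theta-bar factors except copybar_k copy_k (pop_a push_a if k = 1)."""
    out = []
    for op in (o for o in block if order(o) <= k):
        if out and op == bar(out[-1]) and not _kept(out[-1], k):
            out.pop()
        else:
            out.append(op)
    return out
```

For instance, popa pusha reduces to ε under `red` (widening the domain) but is kept by `red_k` at order 1, which is exactly the point of the weak reduction.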
We observe that dom(ρ|k) ⊆ dom(red_k(ρ)), as no weak reduction step can restrict the domain of application. It follows that s ∈ dom(red_k(ρ)).

(⇐) For the sake of simplicity, in the following we suppose that when we reduce a factor copy_k copy̅_k, the copy_k could not be matched with some copy̅_k at its left, and similarly for the copy̅_k at its right (w.l.o.g., as the system is confluent). We show that each weak reduction step either leaves dom(ρ) unchanged or enlarges it, but in the latter case every stack added to it lies outside one of the dom(red_k(ρ)):

- If ρ = ρ1 copy_k copy̅_k ρ2 for k ≤ n (resp. ρ1 push_a pop_a ρ2), then dom(ρ) = dom(ρ1ρ2).
- If ρ = ρ1 copy̅_k copy_k ρ2 for k < n (resp. ρ1 pop_a push_a ρ2), then for every stack s in dom(ρ1ρ2) \ dom(ρ), we get that ρ1(s) ∉ dom(copy̅_k) (resp. ρ1(s) ∉ dom(pop_a)). As red_k(ρ) = red_k(ρ1) copy̅_k copy_k red_k(ρ2) (resp. red_1(ρ1) pop_a push_a red_1(ρ2)) and red_k(ρ1)(s) = ρ1|k(s), we get that s ∉ dom(red_k(ρ)).

Therefore, by induction on the weak reduction steps of ρ, if s ∈ dom(red_n(ρ)) \ dom(ρ), then s ∉ dom(red_k(ρ)) for some k < n. As, furthermore, for every k ≤ n, s ∈ dom(ρ) implies that s ∈ dom(ρ|k), and therefore that s ∈ dom(red_k(ρ)), we get the result. ◀

The rest of this subsection is devoted to proving that it is decidable whether an n-block is iterable on some stack. The decision algorithm is based on the observation that when ρ is iterable, then for every k, red_k(ρ) can be written as ρEk ρIk ρ̄Ek, where ρ̄Ek denotes the inverse of ρEk and ρIk does not contain copy̅_k (see Theorem 15). Thus, if we iterate ρ, at each level the accumulated effect will only be the accumulated effect of ρIk, as ρEk and ρ̄Ek cancel each other. We show that ρ is iterable if, and only if, this accumulated effect does not decrease the “size of the stack” at any level, i.e., ρIk does not contain copy̅_k for any k. The proof, though rather technical in its formulation, relies only on the definition of the weak reduction and on the two following auxiliary lemmas, the second being proven in Appendix B.

▶ Lemma 12 ([3], Lemme 4.1.7).
For every n-block ρ, red(ρ) = ε if, and only if, there is a stack s such that s = ρ(s).

▶ Corollary 14. For every n-block ρ and order k, if red_k(ρ) is of the form ρ1 ρ2 ρ̄1 with ρ̄1 containing a copy̅_k (or a pop_a if k = 1), then dom(ρ) = ∅.

Proof. If ρ̄1 contains a copy̅_k (resp. pop_a), then ρ1 contains a copy_k (resp. push_a); therefore, red_k(ρ) contains a factor of the form copy_k ρ′ copy̅_k (resp. push_a ρ′ pop_a), and the result follows from Lemma 13. ◀

▶ Theorem 15. Given a stack s and an n-block ρ, ρ is iterable on s if, and only if, s ∈ dom(ρ) and, for every k ≤ n, red_k(ρ) is of the form ρEk ρIk ρ̄Ek with ρIk containing no copy̅_k (or no pop_a if k = 1).

Proof. (⇒) We only do the proof for k ≥ 2; the case k = 1 is similar, it suffices to replace copy_k and copy̅_k with push_a and pop_a. Suppose that ρ is iterable on a stack s, i.e., for every i, s ∈ dom(ρ^i). Fix k and let ρ̄Ek be the largest suffix of red_k(ρ) such that red_k(ρ) = ρEk ρIk ρ̄Ek. Notice that this choice implies that red_k((ρIk)²) = (ρIk)². We prove by way of contradiction that ρIk contains no copy̅_k. Suppose that ρIk contains an occurrence of copy̅_k. From Lemma 13, we get that ρIk = ρ1 copy̅_k ρ2 with ρ1 containing no order-k operation. Suppose that ρ2 contains a copy_k. Lemma 13 entails that ρ2 = ρ3 copy_k ρ4, where ρ4 contains no order-k operation. By maximality of ρ̄Ek, we get red_k(ρ²) = ρEk ρ1 copy̅_k ρ3 copy_k ρ4 ρ1 copy̅_k ρ3 copy_k ρ4 ρ̄Ek. As s ∈ dom(ρ²), by the definition of application, we get that (ρ4ρ1)(s′) = s′, for s′ = (ρEk ρ1 copy̅_k ρ3 copy_k)(s). From Lemma 12, we get red(ρ4ρ1) = ε. As it contains no order-k operation, we also have red_k(ρ4ρ1) = ε. But then we obtain red_k((ρIk)²) = red_k(ρ1 copy̅_k ρ3 copy_k ρ4 ρ1 copy̅_k ρ3 copy_k ρ4) = red_k(ρ1 copy̅_k ρ3 copy_k copy̅_k ρ3 copy_k ρ4) = red_k(ρ1 copy̅_k ρ3 ρ3 copy_k ρ4), so that red_k((ρIk)²) ≠ (ρIk)². This contradicts the remark made above. Therefore, ρ2 does not contain any copy_k. Thus, ρIk contains some copy̅_k and does not contain any copy_k.
Therefore, for every s′, top_k(ρIk(s′)) has strictly fewer (k − 1)-stacks than top_k(s′), and ρIk is only applicable a finite number of times to any stack. From Theorem 11, as s ∈ dom(ρ^i) for all i, we get that s ∈ dom(red_k(ρ)^i) for all i, and therefore ρEk(s) ∈ dom((ρIk)^i) for all i. We thus get a contradiction. Therefore, for every k, ρIk contains no copy̅_k operation.

(⇐) We proceed by induction on the order n. Let us consider a 1-block ρ and a stack s ∈ dom(ρ) such that red_1(ρ) = ρE1 ρI1 ρ̄E1, with ρI1 containing no pop_a with a ∈ Σ. As s ∈ dom(ρ), Corollary 14 shows that ρ̄E1 contains no pop_a with a ∈ Σ. As a consequence, we have dom(ρI1) = dom(ρ̄E1) = Stacks1. Therefore ρE1(s) ∈ dom((ρI1)^i ρ̄E1) for all i, and ρE1(s) is defined as s ∈ dom(ρ). As red_1(ρ^i) = ρE1 (ρI1)^i ρ̄E1, we get s ∈ dom(red_1(ρ^i)) for all i. Using Theorem 11, we get that ρ is iterable on s.

Suppose now that the property holds for (n − 1)-blocks, and consider an n-block ρ and a stack s ∈ dom(ρ) such that for all k ≤ n, red_k(ρ) = ρEk ρIk ρ̄Ek with ρIk containing no copy̅_k. From Corollary 14, we get that ρ̄Ek contains no copy̅_k either. Let us show that s ∈ dom(red_n(ρ^i)) for all i. By induction hypothesis, ρ|n−1 is iterable on s, thus s ∈ dom((ρ|n−1)^i) for all i. Observe that (ρ|n−1)^i = ρ^i|n−1. Therefore, s ∈ dom(red_n(ρ^i)|n−1) for all i, as the latter can be obtained from ρ^i|n−1 by applying reduction steps. As s ∈ dom(ρ), ρEn(s) is defined, and we deduce that ρEn(s) ∈ dom(((ρIn)^i ρ̄En)|n−1) for all i. Given an n-block ρ′ containing no copy̅_n, as copy_n is applicable to all stacks, we get that for all s′, if s′ ∈ dom(ρ′|n−1), then s′ ∈ dom(ρ′). Thus ρEn(s) ∈ dom((ρIn)^i ρ̄En) for all i, which entails s ∈ dom(red_n(ρ^i)) for all i. As, by induction hypothesis, s ∈ dom(red_k(ρ^i)) for all i and k < n, we deduce from Theorem 11 that ρ is iterable on s.
◀

An important remark stemming from the characterisation of Theorem 15 is that a block is either applicable only finitely many times to every stack, or applicable arbitrarily many times to every stack on which it can be applied once.

[Figure 2: a 1-dim 2-PVASS with states A, B, C, D, E, whose edges carry counter updates and stack operations such as +1, push_b; −1, push_a; and +2, pop_b.]

5 Testing Non-Termination and Unboundedness

In this section, we use the characterisation of iterable blocks of Theorem 15 to obtain a semi-algorithm à la Karp and Miller for the termination and boundedness problems for HOPVASS. As seen in Section 3, these problems are undecidable. It is, however, possible to search for witnesses of non-termination and, with a slight modification, of unboundedness. We first present the semi-algorithm, called the reduced reachability tree, and prove its correctness. Then, we present an example of an HOPVASS on which it does not terminate. Finally, we prove that the semi-algorithm always terminates on HOPDA, and is thus a decision procedure for the termination and boundedness problems for HOPDA. We also recall that it is a decision procedure for 1-PVASS as well [14]; we borrow its presentation from that paper.

We define the reachability tree of a d-dim n-PVASS S as follows. Nodes of the tree are labelled by configurations of S. The root r is labelled by the initial configuration (qinit, 0, []n), written r : (qinit, 0, []n). Each node u : (p, x, s) has one child v : (q, y, t) for each step (p, x, s) −(a,θ)→ (q, y, t) in S, and the edge from u to v is labelled by the pair (a, θ). Notice that the reachability tree of S is finitely branching. We say that a node u : (p, x, s) subsumes a node v : (q, y, t) if u is a proper ancestor of v, p = q, x ≤ y, and the block ρ from u to v is iterable on s. Furthermore, we say that u strictly subsumes v if x < y or red(ρ) is not ε.

▶ Theorem 16.
If the reachability tree of a d-dim n-PVASS S contains two nodes u and v such that u subsumes v (resp. u strictly subsumes v), then S has an infinite initialised run (resp. an infinite reachability set).

For VASS [10], WSTS [6, 7] and 1-PVASS [14], it can be shown that every infinite branch of the reachability tree contains two nodes such that one subsumes the other. Therefore, termination and boundedness can be solved by constructing the so-called reduced reachability tree (RRT for short), which is constructed like the reachability tree, except that every branch is stopped at the first node subsumed by one of its ancestors. When the RRT of an HOPVASS is finite, it can be computed, and it contains enough information to decide termination and boundedness. As seen in Section 3, boundedness is undecidable for HOPVASS. Figure 2 depicts an example of a 1-dim 2-PVASS whose reduced reachability tree is infinite. There is only one infinite run in this HOPVASS, and for any two configurations with the same state in this run, either the latter has a smaller counter value, or the sequence of operations between them is not an iterable n-block. This can be proven by an easy case study on the configurations (see Appendix C).

We now turn to showing that the RRT is finite in natural subcases of HOPVASS. In [14], it is shown that the RRT is finite for 1-PVASS, by replacing, in the definition of subsumption, our iterability condition with the condition that s is a prefix of every stack appearing on the path from u to v. Actually, the technique presented here yields, at order 1, a (slightly) smaller RRT than in [14]: contrary to that construction, we can detect that a block in the tree is iterable even if it destructs the stack and then reconstructs it. The rest of the section is devoted to the proof that the RRT is also finite in the case of HOPDA. We first have to introduce some notations and recall some facts.

▶ Lemma 17.
If ρ is a block applicable to []n, then for every order k, red_k(ρ) = red(ρ|k) and red_k(ρ) contains no copy̅_k (no pop_a if k = 1).

From [2, 3], for every n-stack s, there exists a unique reduced block ρs such that ρs([]n) = s. We define a norm on n-stacks by letting ||s|| be the length |ρs| of the reduced block ρs. For every k ≤ n, we define ||s||k = ||top_k(s)||. We make the following observations.

▶ Lemma 18. For every n-stack s and orders k < k′ ≤ n, it holds that ||s||k ≤ ||s||k′.

▶ Lemma 19. Given m, there are at most (2(|Γ| + n − 1) − 1)^m n-stacks s such that ||s||n = m.

We are now ready to prove the main result of this section, namely that the RRT of an HOPDA is finite. To do so, we investigate all possible forms of infinite branches that can appear in the reachability tree of an HOPDA, and show that, in all cases, it is possible to extract an iterable block between two nodes with the same state. The easy case is when the branch visits only finitely many stacks, hence finitely many configurations. In that case, there are two identical configurations on the branch, and the block between them is obviously iterable. The other case is more involved. When the RRT has an infinite branch, this branch represents an infinite run (q0, s0) −θ1→ (q1, s1) ··· −θk→ (qk, sk) ···, with q0 = qinit and s0 = []n. We then consider the smallest order k for which the sequence (||si||k)i∈N is unbounded. For every m, we show that we can extract a particular subsequence of positions j1, ..., jm such that ||s_{ji}||k = i, and such that for all stacks s between s_{ji+1} and s_{jm}, we have ||s||k > i. We then show that the reduced sequence red(θ_{ji+1} ··· θ_{ji′}) does not contain any copy̅_{k′} with k′ ≥ k. As k is the smallest order such that (||si||k)i∈N is unbounded, there are finitely many (k − 1)-stacks that can appear at the top of the stacks si. Consequently, we can find a subsequence of j1, ..., jm such that the topmost (k − 1)-stack is the same for all the stacks s_{ji} with 1 ≤ i ≤ m.
When m is chosen large enough, there must be i and i′ such that q_{ji} = q_{ji′}. Then the conditions are met for us to apply Theorem 15. This gives us an iterable block between two nodes with the same state on the infinite branch we considered.

▶ Theorem 20. The reduced reachability tree of an HOPDA is finite.

Proof. We consider an n-PDA and suppose that its RRT is infinite. By König's Lemma, it contains an infinite branch (q0, s0 = []n), (q1, s1), (q2, s2), ···, and for every i ≥ 1, we call θi the operation such that si = θi(si−1). We thus get an infinite n-block θ1θ2···. Observe that for every i, we have ρ_{si} = red(θ1···θi).

Suppose that the sequence of the ||si||n is bounded, i.e., there exists m ∈ N such that ||si||n ≤ m for every i. By Lemma 19, there are finitely many n-stacks of norm at most m. Therefore, there is a stack s such that s = si for infinitely many i. As Q is finite, there are two positions i < j such that (qi, si) = (qj, sj). Thus, si = (θi+1···θj)(si), and therefore θi+1···θj is iterable on si. Therefore i subsumes j, which contradicts the fact that the branch considered is infinite in the RRT.

Suppose now that the sequence of the ||si||n is unbounded, i.e., for every m ∈ N, there exists i such that ||si||n > m. As ||s||k ≤ ||s||k′ for every k < k′ ≤ n and s ∈ Stacksn (Lemma 18), we can fix k such that for every k′ ≥ k the sequence of the ||si||k′ is unbounded, but for every k′ < k the sequence of the ||si||k′ is bounded. For every m ∈ N, we define jm as the first position at which ||si||k = m, i.e., jm = min{i | ||si||k = m}. Given p < m ∈ N, we define i(p, m) as the last position before jm at which ||si||k = p, i.e., i(p, m) = max{i < jm | ||si||k = p}. As ||s||k − 1 ≤ ||θ(s)||k ≤ ||s||k + 1 for every stack s, operation θ and order k, the positions jm and i(p, m) are defined for every p ≤ m. Furthermore, observe that i(p, m) is strictly increasing with respect to p.
Let us show that for every k′ ≥ k and every p < p′ < m, there is no copy̅_{k′} in red_{k′}(θ_{i(p,m)+1} ··· θ_{i(p′,m)}). Observe first that, as ||s_{i(p,m)}||k < ||s_{i(p,m)+1}||k by definition, θ_{i(p,m)+1} ∈ Opk. Suppose that there is a position i with i(p, m) < i < i(p′, m) such that θi = copy̅_{k′}. As θ1···θi is applicable to []n, red_{k′}(θ1···θi) does not contain any copy̅_{k′} (Lemma 17), and therefore there is a position i′ < i such that θ_{i′} = copy_{k′}, θ_{i′+1}···θ_{i−1} contains neither copy_k nor copy̅_k (w.l.o.g.), and red_{k′}(θ_{i′+1}···θ_{i−1}) = ε. As θ_{i′+1}···θ_{i−1} contains neither copy_k nor copy̅_k, by definition of reduction, for every i′ < ℓ < i, we have ||s_ℓ||k ≥ ||s_{i′}||k = ||s_i||k. Suppose that i′ < i(p, m) + 1; we would then have ||s_{i(p,m)}||k ≥ ||s_i||k, which contradicts the definition of i(p, m). Therefore i′ ≥ i(p, m) + 1, and red_{k′}(θ_{i(p,m)+1}···θ_i) does not contain any copy̅_{k′}. Thus, in any case, red_{k′}(θ_{i(p,m)+1}···θ_{i(p′,m)}) does not contain any copy̅_{k′}.

From Lemma 19, there are at most (2(|Γ| + n − 1) − 1)^{h+1} (k − 1)-stacks of norm at most h. We take m > |Q| · (2(|Γ| + n − 1) − 1)^{h+1}, where h is the highest value of the ||si||k−1. Therefore, we can find |Q| + 1 positions p1 < p2 < ··· < p_{|Q|+1} such that red_{k−1}(θ1···θ_{i(pi,m)}) is the same for every pi, and therefore, for every i < j, red_{k−1}(θ_{i(pi,m)+1}···θ_{i(pj,m)}) = ε. As, from what precedes, for every k′ ≥ k and i < j, red_{k′}(θ_{i(pi,m)+1}···θ_{i(pj,m)}) does not contain any copy̅_{k′}, we get from Theorem 15 that θ_{i(pi,m)+1}···θ_{i(pj,m)} is iterable on s_{i(pi,m)}. We can furthermore find i < j such that q_{i(pi,m)} = q_{i(pj,m)}, and therefore (q_{i(pi,m)}, s_{i(pi,m)}) subsumes (q_{i(pj,m)}, s_{i(pj,m)}), which contradicts the fact that the RRT is infinite. ◀

We derive from Theorem 20 that we can solve termination and boundedness for HOPDA by computing the RRT and checking whether it contains a (strictly) subsumed node.
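The syntactic side of the subsumption test, namely the criterion of Theorem 15, can be sketched in code. The paper proves the criterion but gives no pseudocode, so the string encoding of operations ("push_a", "pop_a", "copy_2", "copy_2^-1") and all helper names below are our own assumptions.

```python
# A sketch of the decidable shape condition of Theorem 15: for every order
# k, red_k(ops) must decompose as e · m · inverse-mirror(e) with no
# destructive order-k operation in the middle part m.

def inverse(op):
    if op.startswith("push_"):
        return "pop_" + op[5:]
    if op.startswith("pop_"):
        return "push_" + op[4:]
    return op[:-3] if op.endswith("^-1") else op + "^-1"

def order(op):
    return 1 if op.startswith(("push_", "pop_")) else int(op.split("_")[1].split("^")[0])

def red_k(ops, k):
    """Weak reduction red_k: restrict to orders <= k, then cancel inverse
    pairs in one left-to-right pass, never removing the protected factor
    copy_k^-1 copy_k (pop_a push_a when k = 1)."""
    out = []
    for op in (o for o in ops if order(o) <= k):
        protected = (k > 1 and op == f"copy_{k}" and out and out[-1] == f"copy_{k}^-1") or \
                    (k == 1 and op.startswith("push_") and out and out[-1] == inverse(op))
        if out and out[-1] == inverse(op) and not protected:
            out.pop()
        else:
            out.append(op)
    return out

def iterable_shape(ops, n):
    """Check the per-order condition of Theorem 15; a block is then
    iterable exactly on the stacks of its domain."""
    for k in range(1, n + 1):
        r = red_k(ops, k)
        j = 0                                    # maximal mirrored border
        while 2 * (j + 1) <= len(r) and r[len(r) - 1 - j] == inverse(r[j]):
            j += 1
        mid = r[j:len(r) - j]
        destructive = (lambda o: o.startswith("pop_")) if k == 1 else (lambda o: o == f"copy_{k}^-1")
        if any(destructive(o) for o in mid):
            return False
    return True

print(iterable_shape(["pop_a", "push_a"], 1))                                  # True
print(iterable_shape(["pop_a", "pop_b", "push_b", "push_a", "push_a"], 1))     # False
```

The second example mimics the order-1 behaviour of the loop of Figure 2: the block destroys more than it rebuilds inside its mirrored border, so it cannot be iterated. Combined with a membership test s ∈ dom(ρ), this check is the test applied to each ancestor when constructing the RRT.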
6 Conclusion

In this paper, we have investigated whether an approach à la Karp and Miller can be used to solve termination and boundedness for HOPVASS. On the negative side, we have shown that coverability, termination, and boundedness are all undecidable for HOPVASS, even in the restricted subcase of one counter and an order-2 stack. This is in sharp contrast with the same model at order 1, for which all three problems are decidable [14, 16]. On the positive side, we have identified a simple and decidable criterion characterising which sequences of higher-order stack operations can be iterated. Such a criterion is crucial for the implementation of Karp and Miller's approach. While the resulting Karp and Miller procedure is only a semi-algorithm for HOPVASS, we have shown that it always terminates for HOPDA. Moreover, when dealing with 1-PVASS, this algorithm is a variant of the algorithm proposed in [14]. We have considered symmetric higher-order operations (as in [2]), namely copy_n operations and their inverses copy̅_n. Our undecidability results still hold for HOPVASS defined with pop_n operations instead of copy̅_n. We conjecture that Karp and Miller's approach can still be applied to HOPVASS with pop_n and yields an algorithm for HOPDA with pop_n.

A Proofs of Section 3

▶ Lemma 2. Let x, y ∈ N and s, t ∈ Stacks2. Assume that s = [[ubv]1]2 where b ∈ Σ and u, v ∈ Σ∗ are such that b(c) = T and v(c) ∈ Z∗. Then the following assertions hold:
(A, x, s) −→* (D, y, t) in Fc if, and only if, s = t and x + δ(v(c)) = y,
(E, x, s) −→* (H, y, t) in Bc if, and only if, s = t and x − δ(v(c)) = y.

Proof. We start with the proof of the first assertion. Define w = ubv. Suppose that there is a run from (A, x, s) to (D, y, t) in Fc, and pick one of them.
Since b(c) = T and a(c) ≠ T for every action a occurring in v, the run necessarily begins with the following steps:

(A, x, s) −copy_2→ (B, x, [[w]1[ubv]1]2) −→* (B, x′, [[w]1[ub]1]2) −peek_b→ (C, x′, [[w]1[ub]1]2)   (2)

where x′ = x + |v|K. Then, the run necessarily continues with the following steps:

(C, x′, [[w]1[ub]1]2) −(−K+a1(c), push_a1)→ ··· −(−K+ak(c), push_ak)→ (C, z, [[w]1[uba1···ak]1]2)   (3)

for some z ∈ N and some actions a1, ..., ak in Σ such that ai(c) ≠ T for every 1 ≤ i ≤ k. It follows from the definition of steps in 1-dim 2-PVASS that x′ −(−K+a1(c))···(−K+ak(c))→ z. This entails that z = x′ − kK + δ(a1(c)···ak(c)). Finally, the run necessarily ends with the following step:

(C, z, [[w]1[uba1···ak]1]2) −copy̅_2→ (D, y, t)   (4)

It follows that y = z and that t = copy̅_2([[w]1[uba1···ak]1]2). The last equality entails that t = [[w]1]2 = s and w = uba1···ak. Since w = ubv, we get that v = a1···ak, hence v(c) = a1(c)···ak(c). We conclude that y = z = x′ − |v|K + δ(v(c)) = x + δ(v(c)).

Conversely, suppose that s = t and x + δ(v(c)) = y. Let us write v as v = a1···ak with ai ∈ Σ. Note that ai(c) ≠ T for every 1 ≤ i ≤ k, by assumption. Therefore, Equation 2 is a run in Fc, where x′ = x + kK. Observe that, for every 1 ≤ i ≤ k,

x′ + (−K + a1(c)) + ··· + (−K + ai(c)) = x + (k − i)K + a1(c) + ··· + ai(c) = y + (K − a_{i+1}(c)) + ··· + (K − ak(c))

It follows that x′ −(−K+a1(c))···(−K+ak(c))→ y. We deduce that Equation 3 is also a run in Fc, by letting z = y. Moreover, Equation 4 is also a run in Fc since z = y and w = ubv = uba1···ak. By concatenating these three runs, we obtain that (A, x, s) −→* (D, y, t) in Fc.

The second assertion follows from the first by replacing v = a1···ak with v′ = a′1···a′k, where a′i differs from ai only in c, with a′i(c) = −ai(c). ◀

▶ Lemma 3. Let y ∈ N and s, t ∈ Stacks2. Assume that s = [[(T, ..., T)wa]1]2 where a ∈ Σ and w ∈ Σ∗ are such that 0 −w→. Then (I, 0, s) −→* (J, y, t) in Cc if, and only if, y = 0, s = t and 0 −w(c)a(c)→.

Proof. We may write (T, ..., T)w = ubv for some b ∈ Σ and u, v ∈ Σ∗ such that b(c) = T and v(c) ∈ Z∗. This entails that Tw(c) = u(c)Tv(c). Since 0 −w→, we get that 0 −u(c)Tv(c)→, hence 0 −u(c)T→ 0 −v(c)→ x for x = δ(v(c)). Observe that x −a(c)→ if, and only if, 0 −w(c)a(c)→: the “only if” direction follows from 0 −w(c)→ x, and the “if” direction follows from forward determinism of −w(c)→.

We now proceed with the proof of the lemma. Suppose that (I, 0, s) −→* (J, y, t) in Cc. We consider two cases, depending on a(c). If a(c) ∈ Z then, by definition of Cc (see Figure 1b), we have (A, 0, s) −→* (D, z, t′) in Fc and (E, z, t′) −→* (H, y, t) in Bc, for some z ∈ N and t′ ∈ Stacks2. Recall that s = [[ubva]1]2. We get from Lemma 2 that s = t′ and δ(v(c)a(c)) = z, and we get from Lemma 2 that t′ = t and z − δ(v(c)a(c)) = y. It follows that y = 0 and x + a(c) = δ(v(c)) + a(c) = z ≥ 0. We derive that x −a(c)→, hence 0 −w(c)a(c)→.

The other case is when a(c) = T. In that case, by definition of Cc (see Figure 1b), we have (E, 0, s′) −→* (H, y, t′) in Bc, for some s′, t′ ∈ Stacks2 such that s′ = pop_a(s) and t = push_a(t′). Note that s′ = [[ubv]1]2. We get from Lemma 2 that s′ = t′ and −δ(v(c)) = y, hence δ(v(c)) ≤ 0. It follows that x = δ(v(c)) = 0, and therefore x −T→. Moreover, we deduce from s′ = t′ that t = push_a(pop_a(s)) = s. We have shown that y = 0, s = t and x −a(c)→, hence 0 −w(c)a(c)→.

Conversely, suppose that 0 −w(c)a(c)→ and let us show that (I, 0, s) −→* (J, 0, s). Note that x −a(c)→, and let z ∈ N be such that x −a(c)→ z. It follows that z = δ(v(c)a(c)) since x = δ(v(c)). Recall that s = [[ubva]1]2. We again consider two cases, depending on a(c). If a(c) ∈ Z then we get from Lemma 2 that (A, 0, s) −→* (D, z, s) in Fc, and that (E, z, s) −→* (H, 0, s) in Bc.
It follows that (I, 0, s) −→* (J, 0, s) in Cc. If a(c) = T then x = 0 since x −a(c)→. Hence, δ(v(c)) = 0. Let s′ = [[ubv]1]2 and note that s′ = pop_a(s). We get from Lemma 2 that (E, 0, s′) −→* (H, 0, s′) in Bc. It follows that (I, 0, s) −→* (J, 0, s) in Cc. ◀

▶ Lemma 4. Let x ∈ Nd and s ∈ Stacks2 be such that x ⊲⊳ s. For every step (p, x) −a→ (q, y) in M, there exists a run (p, 0, s) −push_a→ (q̃, 0, t) ⇝ (q, 0, t) in S with y ⊲⊳ t.

Proof. Consider a step (p, x) −a→ (q, y) in M. Since x ⊲⊳ s, there exists w ∈ Σ∗ such that 0 −w→ x and s = [[(T, ..., T)w]1]2. Let t = push_a(s) = [[(T, ..., T)wa]1]2. Observe that, by construction of S from M (see Figure 1c), (p, 0, s) −push_a→ (q̃, 0, t) is a step in S, since p −a→ q is a transition in M. Notice that 0 −wa→ y since 0 −w→ x and x −a→ y. This entails that y ⊲⊳ t and that 0 −w(c)a(c)→ for every 1 ≤ c ≤ d. We derive from Lemma 3 that (I, 0, t) −→* (J, 0, t) in Cc for every 1 ≤ c ≤ d. It follows that (q̃, 0, t) ⇝ (q, 0, t) in S. ◀

▶ Corollary 5. For every initialised run (q0, x0) −a1→ (q1, x1) ··· −ak→ (qk, xk) ··· in M, there is an initialised run (q̃init, 0, []2) −push_(T,...,T)→ (q0, 0, s0) −push_a1→ (q̃1, 0, s1) ⇝ (q1, 0, s1) ··· −push_ak→ (q̃k, 0, sk) ⇝ (qk, 0, sk) ··· in S.

Proof. Recall that the initial configuration of M is (q0, x0) = (qinit, 0) and that the initial configuration of S is (q̃init, 0, []2). Observe that (q̃init, 0, []2) −push_(T,...,T)→ (qinit, 0, s0) in S for the stack s0 = [[(T, ..., T)]1]2. Note that 0 ⊲⊳ s0. It follows from Lemma 4, by induction on i, that there exist si ∈ Stacks2 and runs (qi−1, 0, si−1) −push_ai→ (q̃i, 0, si) ⇝ (qi, 0, si) in S with xi ⊲⊳ si, for every i ≥ 1. We obtain the desired initialised run of S by concatenating these runs. ◀

▶ Lemma 6. Assume that t = [[(T, ..., T)wa]1]2 where a ∈ Σ and w ∈ Σ∗ are such that 0 −w→. For every run (q̃, 0, t) ⇝ (q, x, s), it holds that x = 0, s = t and 0 −wa→.

Proof. Consider a run (q̃, 0, t) ⇝ (q, x, s).
Recall that ⇝ means that the run can be decomposed into a first step (moving from q̃ to C1), a last step (moving from Cd to q), and runs of C1, ..., Cd in between. So there exist x0, ..., xd ∈ N and s0, ..., sd ∈ Stacks2, with x0 = 0, s0 = t, xd = x and sd = s, such that (I, x_{c−1}, s_{c−1}) −→* (J, xc, sc) in Cc, for every 1 ≤ c ≤ d. We derive from Lemma 3, by induction on c, that xc = 0, sc = t and 0 −w(c)a(c)→, for every 1 ≤ c ≤ d. It follows that x = xd = 0, t = sd = s and 0 −wa→. ◀

▶ Corollary 7. Every initialised run of S that is infinite or ends with a configuration whose state is in Q is of the form (q̃init, 0, []2) −push_(T,...,T)→ (q0, 0, s0) −push_a1→ (q̃1, 0, s1) ⇝ (q1, 0, s1) ··· −push_ak→ (q̃k, 0, sk) ⇝ (qk, 0, sk) ··· with qi ∈ Q. Moreover, for every such run in S, there is an initialised run (q0, x0) −a1→ (q1, x1) ··· −ak→ (qk, xk) ··· in M.

Proof. Consider an initialised run in S that is infinite or ends with a configuration whose state is in Q. If the run is infinite, then it visits infinitely many configurations whose state is in Q. Indeed, if it were not the case, then an infinite suffix of the run would remain forever in the same Fc or Bc. This is impossible, as each loop in Fc or Bc either shrinks the stack or decreases the counter κ, since K satisfies |a(c)| < K for every a ∈ Σ with a(c) ≠ T. So the initialised run under consideration starts with the step (q̃init, 0, []2) −push_(T,...,T)→ (qinit, 0, [[(T, ..., T)]1]2) followed by a run of the form:

(q0, x0, s0) −push_a1→ (q̃1, y1, t1) ⇝ (q1, x1, s1) ··· −push_ak→ (q̃k, yk, tk) ⇝ (qk, xk, sk) ···

where (q0, x0, s0) = (qinit, 0, [[(T, ..., T)]1]2) and q1, ..., qk, ... are in Q. Observe that yi = xi−1 and ti = push_ai(si−1), for every i ≥ 1. We derive from Lemma 6, by induction on i, that xi = 0, si = ti = [[(T, ..., T)a1···ai]1]2 and 0 −a1···ai→, for every i ≥ 1. Let xi ∈ Nd be such that 0 −a1···ai→ xi. It is readily seen that 0 −a1→ x1 ··· −ak→ xk ···. This comes from the observation that −w→ is forward deterministic. Moreover, the construction of S from M (see Figure 1c) entails that qi−1 −ai→ qi is a transition of M, for every i ≥ 1. It follows that (q0, 0) −a1→ (q1, x1) ··· −ak→ (qk, xk) ··· is a run in M. ◀

B Proofs of Section 4

▶ Lemma 13. For every n-block ρ and order k, if red_k(ρ) contains a factor of the form copy_k ρ′ copy̅_k (or push_a ρ′ pop_b if k = 1), then dom(ρ) = ∅.

Proof. By contradiction, suppose that s ∈ dom(ρ) and red_k(ρ) contains a factor of the form copy_k ρ′ copy̅_k (or push_a ρ′ pop_b if k = 1). Let ρ2 be one of the smallest such ρ′. If k = 1 then we get that ρ2 = ε, hence red_k(ρ) contains a factor of the form push_a pop_b. If a = b, this contradicts the fact that red_k(ρ) is weakly reduced. If a ≠ b, this contradicts the assumption that dom(ρ) ≠ ∅. If k > 1 then we get that ρ2 ∈ Op*_{k−1}. We may write red_k(ρ) = ρ1 copy_k ρ2 copy̅_k ρ3 for some ρ1 and ρ3. By Theorem 11, s ∈ dom(red_k(ρ)). Therefore (ρ1 copy_k ρ2)(s) ∈ dom(copy̅_k). As ρ2 ∈ Op*_{k−1}, we necessarily have ρ2(s′) = s′, for s′ = (ρ1 copy_k)(s). By Lemma 12, red(ρ2) = ε, and as ρ2 ∈ Op*_{k−1}, we derive that red_k(ρ2) = ε. Therefore, red_k(ρ) = ρ1 copy_k copy̅_k ρ3, which contradicts the fact that red_k(ρ) is weakly reduced. ◀

C Proofs and comments of Section 5

▶ Lemma 21. For every n-block ρ and every i ≥ 1, red(ρ) = ε if, and only if, red(ρ^i) = ε.

Proof. We only prove the “if” direction, as the “only if” direction is trivial. By contradiction, suppose that i ≥ 2, red(ρ^i) = ε and red(ρ) ≠ ε. Let us decompose red(ρ) as red(ρ) = ρ1 ρ2 ρ̄1 where ρ1 is maximal in length. Note that ρ2 ≠ ε, since we would otherwise get red(ρ) = red(ρ1ρ̄1) = ε. By definition of red, it holds that red(ρ^i) = red(red(ρ)^i) = red(ρ1 ρ2^i ρ̄1) = ε. So there exists a θθ̄ factor in ρ1 ρ2^i ρ̄1. Recall that ρ1 ρ2 ρ̄1 is reduced. This means that this θθ̄ factor is necessarily at the junction between two consecutive occurrences of ρ2. Formally, we get that ρ2 = θ̄ ρ3 θ for some ρ3. Hence, red(ρ) = ρ1 θ̄ ρ3 θ ρ̄1, which contradicts the maximality of ρ1.
◀

▶ Corollary 22. For every n-block ρ and stack s such that ρ is iterable on s, if ρ(s) ≠ s then the infinite sequence s, ρ(s), ρ²(s), ..., ρ^i(s), ... contains no repetition.

Proof. If the sequence s, ρ(s), ρ²(s), ..., ρ^i(s), ... contains a repetition, then there is a stack t satisfying ρ^j(t) = t for some j ≥ 1. This entails, by Lemmas 12 and 21, that red(ρ) = ε, hence ρ(s) = s. ◀

▶ Theorem 16. If the reachability tree of a d-dim n-PVASS S contains two nodes u and v such that u subsumes v (resp. u strictly subsumes v), then S has an infinite initialised run (resp. an infinite reachability set).

Proof. We have v : (q, x, s) and v′ : (q, x + y, ρ(s)), where y is a componentwise non-negative vector and ρ is the n-block on the run from v to v′. As x ≤ x + y, we know that, vector-wise, the run from v to v′ is applicable at v′ (by monotonicity). As ρ is iterable on s, we know that, stack-wise, the run from v to v′ is applicable at v′. Thus we can apply the run from v to v′ at v′, obtain a new node z : (q, x + 2y, ρ²(s)), and iterate the process to obtain an infinite sequence of nodes vi : (q, x + iy, ρ^i(s)). We therefore have an infinite run in S. If, furthermore, v strictly subsumes v′, then y ≠ 0 or ρ(s) ≠ s, and all these nodes are labelled by distinct configurations (this claim is obvious if y ≠ 0, and comes from Corollary 22 if ρ(s) ≠ s). Thus S can reach infinitely many configurations and is thus unbounded. ◀

Non-completeness of the test. We detail here why the only infinite run of the HOPVASS of Figure 2 does not contain any iterable subrun. One can show that the only possible configurations containing state B are of the form (B, 1, [[ba^n]1]2). Moreover, the only run moving from (B, 1, [[ba^n]1]2) to (B, 1, [[ba^{n+1}]1]2) passes through the states C, D, E and performs the stack operation sequence ρ = push_a copy_2 pop_a^{n+1} pop_b push_b push_a^{n+1} copy̅_2.
We have red_1(ρ) = pop_a^n pop_b push_b push_a^{n+1}, and one can see that it cannot be written as ρE1 ρI1 ρ̄E1 with ρI1 containing no pop operation. Intuitively, this run adds an a at the top of the stack, copies the stack, and pops it down to the b at the bottom before reconstructing it. As it needs to reach the bottommost symbol while adding a new symbol on top, it cannot be applied a second time, as it no longer goes deep enough. Similarly, all possible configurations containing state C are of the form (C, 0, [[ba^n]1]2), and the same reasoning applies. Configurations containing D are of the form (D, k, [[ba^n]1[ba^{n−k}]1]2). The run going from (D, k, [[ba^n]1[ba^{n−k}]1]2) to (D, k′, [[ba^{n+1}]1[ba^{n+1−k′}]1]2) passes through E, B, C, and performs the stack operation sequence ρ = pop_a^{n−k} pop_b push_b push_a^n copy̅_2 push_a copy_2 pop_a^{k′}. It is easy to see that either red_1(ρ) or red_2(ρ) is not iterable (depending on k and k′). The run going from (D, k, [[ba^n]1[ba^{n−k}]1]2) to (D, k′, [[ba^n]1[ba^{n−k′}]1]2) stays in D and performs the stack operation sequence pop_a^{k′−k}, which is not iterable. Configurations containing E are of the form (E, k, [[ba^n]1[ba^{n−k}]1]2). The run going from (E, k, [[ba^n]1[ba^{n−k}]1]2) to (E, k′, [[ba^{n+1}]1[ba^{n+1−k′}]1]2) is similar to the previous case. The run going from (E, k, [[ba^n]1[ba^{n−k}]1]2) to (E, k′, [[ba^n]1[ba^{n−k′}]1]2) decreases the counter value, and thus cannot be iterated.

▶ Lemma 17. If ρ is a block applicable to []n, then for every order k, red_k(ρ) = red(ρ|k) and red_k(ρ) contains no copy̅_k (no pop_a if k = 1).

Proof. Suppose there is a copy̅_k in ρ|k, at position j in ρ = θ1···θm. As ρ is applicable to []n, there is i < j such that θi = copy_k, there is no other copy_k or copy̅_k between i and j (w.l.o.g.), and (θi+1···θj−1)(θ1···θi([]n)) = θ1···θi([]n). Thus, by Lemma 12, red(θi+1···θj−1|k) = ε. Therefore, red_k(ρ) does not contain any copy̅_k.
Furthermore, at orders lower than k, red and red_k coincide syntactically; therefore red_k(ρ) is reduced for red, and by unicity of the reduced k-block, we get the result. ◀

References

Alfred V. Aho. Nested Stack Automata. J. ACM, 16(3):383–406, 1969. doi:10.1145/321526.321529.

Arnaud Carayol. Regular Sets of Higher-Order Pushdown Stacks. In Joanna Jedrzejowicz and Andrzej Szepietowski, editors, Mathematical Foundations of Computer Science 2005, 30th International Symposium, MFCS 2005, Gdansk, Poland, August 29 – September 2, 2005, Proceedings, volume 3618 of Lecture Notes in Computer Science, pages 168–179. Springer, 2005. doi:10.1007/11549345_16.

Arnaud Carayol. Automates infinis, logiques et langages. PhD thesis, University of Rennes 1, France, 2006. URL: https://tel.archives-ouvertes.fr/tel-00628513.

Arnaud Carayol and Stefan Wöhrle. The Caucal Hierarchy of Infinite Graphs in Terms of Logic and Higher-Order Pushdown Automata. In Paritosh K. Pandya and Jaikumar Radhakrishnan, editors, FST TCS 2003: Foundations of Software Technology and Theoretical Computer Science, 23rd Conference, Mumbai, India, December 15–17, 2003, Proceedings, volume 2914 of Lecture Notes in Computer Science, pages 112–123. Springer, 2003. doi:10.1007/978-3-540-24597-1_10.

Didier Caucal. On Infinite Terms Having a Decidable Monadic Theory. In Krzysztof Diks and Wojciech Rytter, editors, Mathematical Foundations of Computer Science 2002, 27th International Symposium, MFCS 2002, Warsaw, Poland, August 26–30, 2002, Proceedings, volume 2420 of Lecture Notes in Computer Science, pages 165–176. Springer, 2002. doi:10.1007/3-540-45687-2_13.

Alain Finkel. A Generalization of the Procedure of Karp and Miller to Well Structured Transition Systems.
In Thomas Ottmann, editor, Automata, Languages and Programming , 14th International Colloquium, ICALP87, Karlsruhe, Germany, July 13-17 , 1987 , Proceedings, volume 267 of Lecture Notes in Computer Science, pages 499 - 508 . Springer, 1987 . doi:10 .1007/3-540-18088-5_ 43 . Alain Finkel and Philippe Schnoebelen . Well-structured transition systems everywhere! Theor. Comput. Sci. , 256 ( 1-2 ): 63 - 92 , 2001 . doi: 10 .1016/S0304- 3975 ( 00 ) 00102 - X . Sheila A. Greibach . Full AFLs and Nested Iterated Substitution. Information and Control , 16 ( 1 ): 7 - 35 , 1970 . doi: 10 .1016/S0019- 9958 ( 70 ) 80039 - 0 . Matthew Hague , Jonathan Kochems, and C. -H. Luke Ong . Unboundedness and downward closures of higher-order pushdown automata . In Rastislav Bodík and Rupak Majumdar , editors, Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2016 , St . Petersburg, FL, USA, January 20 - 22 , 2016 , pages 151 - 163 . ACM, 2016 . doi: 10 .1145/2837614.2837627. Sci., 3 ( 2 ): 147 - 195 , 1969 . doi: 10 .1016/S0022- 0000 ( 69 ) 80011 - 5 . Teodor Knapik , Damian Niwinski, and Pawel Urzyczyn . Higher-Order Pushdown Trees Are Easy . In Mogens Nielsen and Uffe Engberg, editors, Foundations of Software Science and Computation Structures , 5th International Conference, FOSSACS 2002 . Held as Part of the Joint European Conferences on Theory and Practice of Software , ETAPS 2002 Grenoble, France, April 8- 12 , 2002 , Proceedings, volume 2303 of Lecture Notes in Computer Science, pages 205 - 222 . Springer, 2002 . doi: 10 .1007/3-540-45931-6_ 15 . Ranko Lazic . The reachability problem for vector addition systems with a stack is not elementary . CoRR, abs/1310.1767, 2013 . Presented at RP' 12 . arXiv: 1310 . 1767 . Ranko Lazic and Patrick Totzke . What Makes Petri Nets Harder to Verify: Stack or Data? In Thomas Gibson-Robinson, Philippa J . 
Hopcroft , and Ranko Lazic, editors, Concurrency, Security, and Puzzles - Essays Dedicated to Andrew William Roscoe on the Occasion of His 60th Birthday , volume 10160 of Lecture Notes in Computer Science, pages 144 - 161 . Springer , 2017 . doi: 10 .1007/978-3- 319 -51046- 0 _ 8 . Jérôme Leroux , M. Praveen , and Grégoire Sutre . Hyper-Ackermannian bounds for pushdown vector addition systems . In Thomas A. Henzinger and Dale Miller, editors, Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS) , CSL-LICS '14 , Vienna, Austria, July 14 - 18 , 2014 , pages 63 : 1 - 63 : 10 . ACM, 2014 . doi:10.1145/2603088 .2603146. Jérôme Leroux , Grégoire Sutre, and Patrick Totzke . On Boundedness Problems for Pushdown Vector Addition Systems . In Mikolaj Bojanczyk, Slawomir Lasota, and Igor Potapov, editors, Reachability Problems - 9th International Workshop, RP 2015, Warsaw, Poland, September 21-23 , 2015 , Proceedings, volume 9328 of Lecture Notes in Computer Science, pages 101 - 113 . Springer, 2015 . doi: 10 .1007/978-3- 319 -24537-9_ 10 . Jérôme Leroux , Grégoire Sutre, and Patrick Totzke . On the Coverability Problem for Pushdown Vector Addition Systems in One Dimension . In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015 , Kyoto, Japan, July 6- 10 , 2015 , Proceedings, Part II , volume 9135 of Lecture Notes in Computer Science, pages 324 - 336 . Springer, 2015 . doi: 10 .1007/978-3- 662 -47666-6_ 26 . Richard J. Lipton. The reachability problem requires exponential space . Technical Report 63 , Yale University, January 1976 . A.N. Maslov . Multilevel Stack Automata. Probl. Inf. Transm., 12 ( 1 ): 38 - 43 , 1976 . Ernst W. Mayr and Albert R. Meyer . The Complexity of the Finite Containment Problem for Petri Nets . J. 
ACM , 28 ( 3 ): 561 - 576 , 1981 . doi: 10 .1145/322261.322271. Ken McAloon . Petri nets and large finite sets . Theor. Comput. Sci. , 32 : 173 - 183 , 1984 . doi:10 .1016/ 0304 - 3975 ( 84 ) 90029 - X . Pawel Parys . A Pumping Lemma for Pushdown Graphs of Any Level . In Christoph Dürr and Thomas Wilke, editors, 29th International Symposium on Theoretical Aspects of Computer Science, STACS 2012, February 29th - March 3rd , 2012 , Paris, France, volume 14 of LIPIcs , pages 54 - 65 . Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2012 . doi: 10 .4230/LIPIcs.STACS. 2012 . 54 . Theor. Comput. Sci. , 6 : 223 - 231 , 1978 . doi: 10 .1016/ 0304 - 3975 ( 78 ) 90036 - 1 .



Vincent Penelle, Sylvain Salvati, Grégoire Sutre. On the Boundedness Problem for Higher-Order Pushdown Vector Addition Systems, LIPIcs - Leibniz International Proceedings in Informatics, 2018, 44:1-44:20, DOI: 10.4230/LIPIcs.FSTTCS.2018.44