Minimizing GFG Transition-Based Automata

LIPIcs - Leibniz International Proceedings in Informatics, July 2019

While many applications of automata in formal methods can use nondeterministic automata, some applications, most notably synthesis, need deterministic or good-for-games automata. The latter are nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past. The minimization problems for nondeterministic and deterministic Büchi and co-Büchi word automata are PSPACE-complete and NP-complete, respectively. We describe a polynomial minimization algorithm for good-for-games co-Büchi word automata with transition-based acceptance. Thus, a run is accepting if it traverses a set of designated transitions only finitely often. Our algorithm is based on a sequence of transformations we apply to the automaton, on top of which a minimal quotient automaton is defined.

Bader Abu Radi and Orna Kupferman
School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel

Category: Track B - Automata, Logic, Semantics, and Theory of Programming

2012 ACM Subject Classification: Theory of computation → Formal languages and automata theory; Theory of computation → Automata over infinite objects

Keywords and phrases: Minimization; Deterministic co-Büchi Automata

1 Introduction

Automata theory is one of the longest established areas in Computer Science. A classical problem in automata theory is minimization: generation of an equivalent automaton with a minimal number of states. For automata on finite words, the picture is well understood: for nondeterministic automata, minimization is PSPACE-complete [16], whereas for deterministic automata, a minimization algorithm, based on the Myhill-Nerode right congruence [28, 29], generates in polynomial time a canonical minimal deterministic automaton [14]. Essentially, the canonical automaton, a.k.a. the quotient automaton, is obtained by merging equivalent states.

A prime application of automata theory is specification, verification, and synthesis of reactive systems [36, 8]. The automata-theoretic approach considers relationships between systems and their specifications as relationships between languages. Since we care about the on-going behavior of nonterminating systems, the automata run on infinite words. Acceptance in such automata is determined according to the set of states that are visited infinitely often along the run. In Büchi automata [5] (NBW and DBW, for nondeterministic and deterministic Büchi word automata, respectively), the acceptance condition is a subset α of states, and a run is accepting iff it visits α infinitely often. Dually, in co-Büchi automata (NCW and DCW), a run is accepting iff it visits α only finitely often.

In spite of the extensive use of automata on infinite words in verification and synthesis algorithms and tools, some fundamental problems around their minimization are still open. For nondeterministic automata, minimization is PSPACE-complete, as it is for automata on finite words. Before we describe the situation for deterministic automata, let us elaborate some more on the power of nondeterminism in the context of automata on infinite words, as this would be relevant to our contribution. For automata on finite words, nondeterminism does not increase the expressive power, yet it leads to an exponential succinctness [31].
For automata on infinite words, nondeterminism may increase the expressive power and also leads to an exponential succinctness. For example, NBWs are strictly more expressive than DBWs [21]. In some applications of automata on infinite words, such as model checking, algorithms can proceed with nondeterministic automata, whereas in other applications, such as synthesis and control, they cannot. There, the advantages of nondeterminism are lost, and the algorithms involve complicated determinization constructions [32] or acrobatics for circumventing determinization [20]. Essentially, the inherent difficulty of using nondeterminism in synthesis lies in the fact that each guess of the nondeterministic automaton should accommodate all possible futures.

The study of nondeterministic automata that can resolve their nondeterministic choices in a way that only depends on the past and still accept all words in the language started already in 1996 [19], where the setting is modeled by means of tree automata for derived languages. It then continued by means of good-for-games (GFG) automata, introduced in [13] (GFGness is also used in [6] in the framework of cost functions, under the name "history-determinism"). Formally, a nondeterministic automaton A over an alphabet Σ is GFG if there is a strategy g that maps each finite word u ∈ Σ^* to the transition to be taken after u is read; following g results in accepting all the words in the language of A. Note that a state q of A may be reachable via different words, and g may suggest different transitions from q after different words are read. Still, g depends only on the past, namely on the word read so far. Obviously, there exist GFG automata: deterministic ones, or nondeterministic ones that are determinizable by pruning (DBP); that is, ones that just add transitions on top of a deterministic automaton. In fact, the GFG automata constructed in [13] are DBP (as explained in [13], this does not contradict their usefulness in practice, as their transition relation is simpler than that of the embodied deterministic automaton and can be defined symbolically). In terms of expressive power, it is shown in [19, 30] that GFG automata of a given acceptance type (e.g., Büchi) are as expressive as deterministic automata of that type. The picture in terms of succinctness is diverse. For automata on finite words, GFG automata are always DBP [19, 26]. For automata on infinite words, in particular NBWs and NCWs, GFG automata need not be DBP [3]. Moreover, the best known determinization construction for GFG-NBWs is quadratic, whereas determinization of GFG-NCWs has an exponential blow-up lower bound [17]. Thus, in terms of succinctness, GFG automata on infinite words are more succinct (possibly even exponentially) than deterministic ones. Further research studies characterization, typeness, complementation, and further constructions and decision procedures for GFG automata [17, 4, 2].

Back to the minimization problem. Recall that for finite words, an equivalent minimal deterministic automaton can be obtained by merging equivalent states. A similar algorithm is valid for deterministic weak automata on infinite words: DBWs in which each strongly connected component is either contained in α or disjoint from α [27, 23]. For general DBWs (and hence, also DCWs, as the two dualize each other), merging of equivalent states fails, and minimization is NP-complete [33]. The intractability of the minimization problem has led to the development of numerous heuristics. The heuristics either relax the minimality requirement, for example algorithms based on fair bisimulation [10], which reduce the state space but need not return a minimal automaton,
or relax the equivalence requirement, for example algorithms based on hyper-minimization [1, 15] or almost-equivalence [33], which come with a guarantee about the difference between the language of the original automaton and the ones generated by the algorithm. In some cases, these algorithms do generate a minimal equivalent automaton (in particular, applying relative minimization based on almost-equivalence to a deterministic weak automaton results in an equivalent minimal weak automaton [33]), but in general, they are only heuristics.

In an orthogonal line of work, researchers have studied minimization in richer settings of automata on finite words. One direction is to allow some nondeterminism. As it turns out, however, even the slightest extension of the deterministic model towards a nondeterministic one, for example by allowing at most one nondeterministic choice in every accepting computation or allowing just two initial states instead of one, results in NP-complete minimization problems [24]. Another direction is a study of quantitative settings. Here, the picture is diverse. For example, minimization of deterministic lattice automata [18] is polynomial for automata over linear lattices and is NP-complete for general lattices [11], and minimization of deterministic weighted automata over the tropical semiring is polynomial [25], yet the problem is open for general semirings.

Proving NP-hardness of DBW minimization, Schewe used a reduction from the vertex-cover problem [33]. Essentially (the exact reduction is more complicated and involves an additional letter, required for cases in which vertices in the graph have similar neighbours), given a graph G = ⟨V, E⟩, we seek a minimal DBW for the language L_G of words of the form v_{i_1} · v_{i_2} · v_{i_3} · · · ∈ V^ω, where for all j ≥ 1, we have that ⟨v_{i_j}, v_{i_{j+1}}⟩ ∈ E. We can recognize L_G by an automaton obtained from G by adding self-loops to all vertices, labelling each edge by its destination, and requiring a run to traverse infinitely many original edges of G. Indeed, such runs correspond to words that traverse an infinite path in G, possibly looping at vertices, but not getting trapped in a self-loop, as required by L_G. When, however, the acceptance condition is defined by a set of vertices, rather than edges, we need to duplicate some states, and a minimal duplication corresponds to a minimal vertex cover. Thus, a natural question arises: is there a polynomial minimization algorithm for DBWs and DCWs whose acceptance condition is transition-based? Beyond the theoretical interest, there is recently growing use of transition-based automata in practical applications, with evidence that they offer a simpler translation of LTL formulas to automata and enable simpler constructions and decision procedures [9, 7, 34, 22].

In this paper we present a significant step towards a positive answer to this question and describe a polynomial-time algorithm for the minimization of GFG transition-based NCWs. Consider a GFG-NCW A. Our algorithm is based on a chain of transformations we apply to A. Some of the transformations are introduced in [17], in algorithms for deciding GFGness. We add two more transformations and prove that they guarantee minimality. Our reasoning is based on a careful analysis of the safe components of A, namely the components obtained by removing transitions in α.
We show that a minimal GFG-NCW equivalent to A can be obtained by defining an order on the safe components, and applying the quotient construction on a GFG-NCW obtained by restricting attention to states that belong to components that form a frontier in this order.

The paper is organized as follows. In Section 2, we define GFG-NCWs and some properties of GFG-NCWs that can be attained in polynomial time using existing results. In Section 3, we describe two additional properties and prove that they guarantee minimality. Then, in Sections 4 and 5, we show how the two properties can be attained in polynomial time, thus concluding our minimization procedure. In Section 6, we discuss how our results contribute to the quest for efficient DBW and DCW minimization.

2 Preliminaries

For a finite nonempty alphabet Σ, an infinite word w = σ_1 · σ_2 · · · ∈ Σ^ω is an infinite sequence of letters from Σ. A language L ⊆ Σ^ω is a set of words. We denote the empty word by ε, and the set of finite words over Σ by Σ^*. For i ≥ 0, we use w[1, i] to denote the (possibly empty) prefix σ_1 · σ_2 · · · σ_i of w, and use w[i + 1, ∞] to denote its suffix σ_{i+1} · σ_{i+2} · · ·.

A nondeterministic automaton over infinite words is A = ⟨Σ, Q, q_0, δ, α⟩, where Σ is an alphabet, Q is a finite set of states, q_0 ∈ Q is an initial state, δ : Q × Σ → 2^Q \ {∅} is a transition function, and α is an acceptance condition, to be defined below. For states q and s and a letter σ ∈ Σ, we say that s is a σ-successor of q if s ∈ δ(q, σ). The size of A, denoted |A|, is defined as its number of states; thus, |A| = |Q|. Note that A is total, in the sense that it has at least one successor for each state and letter, and that A may be nondeterministic, as the transition function may specify several successors for each state and letter. If |δ(q, σ)| = 1 for every state q ∈ Q and letter σ ∈ Σ, then A is deterministic.

When A runs on an input word, it starts in the initial state and proceeds according to the transition function. Formally, a run of A on w = σ_1 · σ_2 · · · ∈ Σ^ω is an infinite sequence of states r = r_0, r_1, r_2, . . . ∈ Q^ω, such that r_0 = q_0, and for all i ≥ 0, we have that r_{i+1} ∈ δ(r_i, σ_{i+1}). We sometimes extend δ to sets of states and to finite words. Then, δ : 2^Q × Σ^* → 2^Q is such that for every S ∈ 2^Q, finite word u ∈ Σ^*, and letter σ ∈ Σ, we have that δ(S, ε) = S, δ(S, σ) = ⋃_{s ∈ S} δ(s, σ), and δ(S, u · σ) = δ(δ(S, u), σ). Thus, δ(S, u) is the set of states that A may reach when it reads u from some state in S.

The transition function δ induces a transition relation Δ ⊆ Q × Σ × Q, where for every two states q, s ∈ Q and letter σ ∈ Σ, we have that ⟨q, σ, s⟩ ∈ Δ iff s ∈ δ(q, σ). We sometimes view the run r = r_0, r_1, r_2, . . . on w = σ_1 · σ_2 · · · as an infinite sequence of successive transitions ⟨r_0, σ_1, r_1⟩, ⟨r_1, σ_2, r_2⟩, . . . ∈ Δ^ω.

The acceptance condition α determines which runs are "good". We consider here transition-based automata, in which α refers to the set of transitions that are traversed infinitely often during the run; specifically, α ⊆ Δ. We use the terms α-transitions and ᾱ-transitions to refer to transitions in α and in Δ \ α, respectively. We also refer to the restrictions δ^α and δ^ᾱ of δ, where for all q, s ∈ Q and σ ∈ Σ, we have that s ∈ δ^α(q, σ) iff ⟨q, σ, s⟩ ∈ α, and s ∈ δ^ᾱ(q, σ) iff ⟨q, σ, s⟩ ∈ Δ \ α. For a run r ∈ Δ^ω, let inf(r) ⊆ Δ be the set of transitions that r traverses infinitely often.
Thus, inf(r) = {⟨q, σ, s⟩ ∈ Δ : q = r_i, σ = σ_{i+1}, and s = r_{i+1} for infinitely many i's}. In co-Büchi automata, a run r is accepting iff inf(r) ∩ α = ∅, thus if r traverses transitions in α only finitely often. A run that is not accepting is rejecting. A word w is accepted by A if there is an accepting run of A on w. The language of A, denoted L(A), is the set of words that A accepts. Two automata are equivalent if their languages are equivalent. We use tNCW and tDCW to abbreviate nondeterministic and deterministic transition-based co-Büchi automata over infinite words, respectively.

For a state q ∈ Q of an automaton A = ⟨Σ, Q, q_0, δ, α⟩, we define A^q to be the automaton obtained from A by setting the initial state to be q. Thus, A^q = ⟨Σ, Q, q, δ, α⟩. We say that two states q, s ∈ Q are equivalent, denoted q ∼_A s, if L(A^q) = L(A^s). The automaton A is semantically deterministic if different nondeterministic choices lead to equivalent states. Thus, for every state q ∈ Q and letter σ ∈ Σ, all the σ-successors of q are equivalent: for every two states s, s′ ∈ Q such that ⟨q, σ, s⟩ and ⟨q, σ, s′⟩ are in Δ, we have that s ∼_A s′. The following proposition follows immediately from the definitions.

Proposition 1. Consider a semantically deterministic automaton A, states q, s ∈ Q, a letter σ ∈ Σ, and transitions ⟨q, σ, q′⟩, ⟨s, σ, s′⟩ ∈ Δ. If q ∼_A s, then q′ ∼_A s′.

A tNCW A is safe deterministic if by removing its α-transitions, we get a (possibly not total) deterministic automaton. Thus, A is safe deterministic if for every state q ∈ Q and letter σ ∈ Σ, it holds that |δ^ᾱ(q, σ)| ≤ 1. We refer to the components we get by removing A's α-transitions as the safe components of A, and we denote the set of safe components of A by S(A). For a safe component S ∈ S(A), the size of S, denoted |S|, is the number of states in S. Note that an accepting run of A eventually gets trapped in one of A's safe components.

An automaton A is good for games (GFG, for short) if its nondeterminism can be resolved based on the past, thus on the prefix of the input word read so far. Formally, A is GFG if there exists a strategy f : Σ^* → Q such that the following holds:
1. The strategy f is consistent with the transition function. That is, for every finite word u ∈ Σ^* and letter σ ∈ Σ, we have that ⟨f(u), σ, f(u · σ)⟩ ∈ Δ.
2. Following f causes A to accept all the words in its language. That is, for every infinite word w = σ_1 · σ_2 · · · ∈ Σ^ω, if w ∈ L(A), then the run f(w[1, 0]), f(w[1, 1]), f(w[1, 2]), . . ., which we denote by f(w), is accepting.
We say that the strategy f witnesses A's GFGness. For an automaton A, we say that a state q of A is GFG if A^q is GFG. Finally, we say that a GFG-tNCW A is minimal if for every equivalent GFG-tNCW B, it holds that |A| ≤ |B|.
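To make the transition-based co-Büchi definitions concrete, here is a small Python sketch (ours, not part of the paper) of a tNCW and of acceptance of an ultimately periodic run given as a lasso; the names `TNCW` and `lasso_accepting` are illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

State = str
Letter = str
Transition = Tuple[State, Letter, State]

@dataclass
class TNCW:
    """A transition-based co-Buechi automaton <Sigma, Q, q0, delta, alpha>."""
    sigma: Set[Letter]
    states: Set[State]
    init: State
    delta: Dict[Tuple[State, Letter], Set[State]]  # total, possibly nondeterministic
    alpha: Set[Transition]                         # the rejecting (alpha) transitions

    def successors(self, q: State, a: Letter) -> Set[State]:
        return self.delta.get((q, a), set())

def lasso_accepting(aut: TNCW, run: List[State], word: List[Letter], loop: int) -> bool:
    """run[0..n] reads word[0..n-1], and run[loop..n] (with run[n] == run[loop])
    repeats forever.  The run is accepting iff inf(r), i.e. the set of
    transitions on the loop, is disjoint from alpha."""
    assert run[0] == aut.init and len(run) == len(word) + 1
    assert loop < len(word) and run[-1] == run[loop]
    for i, a in enumerate(word):
        assert run[i + 1] in aut.successors(run[i], a), "not a run of the automaton"
    loop_transitions = {(run[i], word[i], run[i + 1]) for i in range(loop, len(word))}
    return loop_transitions.isdisjoint(aut.alpha)
```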
Consider a directed graph G = ⟨V, E⟩. A strongly connected set in G (SCS, for short) is a set C ⊆ V such that for every two vertices v, v′ ∈ C, there is a path from v to v′. An SCS is maximal if it is maximal with respect to containment; that is, for every nonempty set C′ ⊆ V \ C, it holds that C ∪ C′ is not an SCS. The maximal strongly connected sets are also termed strongly connected components (SCCs, for short). The SCC graph of G is the graph defined over the SCCs of G, in which there is an edge from an SCC C to another SCC C′ iff there are two vertices v ∈ C and v′ ∈ C′ with ⟨v, v′⟩ ∈ E. An SCC is ergodic iff it has no outgoing edges in the SCC graph. The SCC graph of G can be computed in linear time by standard SCC algorithms [35].

An automaton A = ⟨Σ, Q, q_0, δ, α⟩ induces a directed graph G_A = ⟨Q, E⟩, where ⟨q, q′⟩ ∈ E iff there is a letter σ ∈ Σ such that ⟨q, σ, q′⟩ ∈ Δ. The SCSs and SCCs of A are those of G_A. We say that a tNCW A is normal if all the safe components of A are SCSs. That is, for all states q and s of A, if there is a path of ᾱ-transitions from q to s, then there is also a path of ᾱ-transitions from s to q.

We now combine several properties defined above and say that a GFG-tNCW A is nice if all the states in A are reachable and GFG, and A is normal, safe deterministic, and semantically deterministic. In the theorem below, we combine arguments from [17] showing that each of these properties can be obtained in at most polynomial time, and without the properties being conflicting. For some properties, we give an alternative and simpler proof.

Theorem 2 ([17]). Every GFG-tNCW A can be turned, in polynomial time, into an equivalent nice GFG-tNCW B such that |B| ≤ |A|.

Proof. It is shown in [17] that one can decide the GFGness of a tNCW A in polynomial time. The proof goes through an intermediate step where the authors construct a two-player game such that if the first player does not win the game, then A is not GFG, and otherwise a winning strategy for him induces a safe-deterministic GFG-tNCW B equivalent to A. As we start with a GFG-tNCW A, such a winning strategy is guaranteed to exist, and we obtain an equivalent safe-deterministic GFG-tNCW B in polynomial time. In fact, it can be shown that B is also semantically deterministic. Yet, for completeness, we give below a general procedure for semantic determinization.

For a tNCW A, we say that a transition ⟨q, σ, s⟩ ∈ Δ is covering if for every transition ⟨q, σ, s′⟩, it holds that L(A^{s′}) ⊆ L(A^s). If A is GFG and f is a strategy witnessing its GFGness, we say that a state q of A is used by f if there is a finite word u with f(u) = q, and we say that a transition ⟨q, σ, q′⟩ of A is used by f if there is a finite word u with f(u) = q and f(u · σ) = q′. Since states that are not GFG can be detected in polynomial time, and as all states that are used by a strategy that witnesses B's GFGness are GFG, the removal of non-GFG states does not affect B's language. Note that removing the non-GFG states may result in a non-total automaton, in which case we add a rejecting sink. Now, using the fact that language containment of GFG-tNCWs can be checked in polynomial time [12, 17], and transitions that are used by strategies are covering [17], one can semantically determinize B by removing non-covering transitions. States that are not reachable are easy to detect, and their removal does not affect B's language. Normalization is also easy to obtain and involves adding some existing transitions to α [17]. Indeed, if the safe components of B are not SCSs, then every ᾱ-transition connecting different SCCs of B's safe components can be added to α without affecting the acceptance of runs in B, as every accepting run traverses such transitions only finitely often. Thus, the language and GFGness of all states are not affected. Finally, it is not hard to verify that the properties, in the order we obtain them in the proof, are not conflicting, and thus the described sequence of transformations results in a nice GFG-tNCW.
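Continuing the sketch above, the following helpers (ours, with illustrative names) compute the safe components as the SCCs of the graph obtained by deleting the α-transitions, and perform the normalization step used in the proof of Theorem 2: every ᾱ-transition that connects different safe components is added to α, which accepting runs traverse only finitely often anyway.

```python
def safe_graph(aut: TNCW) -> Dict[State, Set[State]]:
    """Successor relation of aut restricted to its non-alpha transitions."""
    g: Dict[State, Set[State]] = {q: set() for q in aut.states}
    for (q, a), succs in aut.delta.items():
        for s in succs:
            if (q, a, s) not in aut.alpha:
                g[q].add(s)
    return g

def sccs(g: Dict) -> List[Set]:
    """Kosaraju's SCC algorithm; SCCs are computable in linear time [35]."""
    order, seen = [], set()
    def visit(v):
        seen.add(v)
        for w in g[v]:
            if w not in seen:
                visit(w)
        order.append(v)                       # post-order = finishing time
    for v in g:
        if v not in seen:
            visit(v)
    rev = {v: set() for v in g}               # reversed graph
    for v, ws in g.items():
        for w in ws:
            rev[w].add(v)
    comps, assigned = [], set()
    def collect(v, comp):
        assigned.add(v)
        comp.add(v)
        for w in rev[v]:
            if w not in assigned:
                collect(w, comp)
    for v in reversed(order):
        if v not in assigned:
            comp = set()
            collect(v, comp)
            comps.append(comp)
    return comps

def safe_components(aut: TNCW) -> List[Set[State]]:
    return sccs(safe_graph(aut))

def normalize(aut: TNCW) -> TNCW:
    """Add to alpha every non-alpha transition whose endpoints lie in different
    safe components; the language and the GFGness of all states are unchanged."""
    comp_of = {q: frozenset(c) for c in safe_components(aut) for q in c}
    new_alpha = set(aut.alpha)
    for (q, a), succs in aut.delta.items():
        for s in succs:
            if (q, a, s) not in aut.alpha and comp_of[q] != comp_of[s]:
                new_alpha.add((q, a, s))
    return TNCW(aut.sigma, aut.states, aut.init, aut.delta, new_alpha)
```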
3 A Sufficient Condition for GFG-tNCW Minimality

In this section, we define two additional properties for nice GFG-tNCWs, namely safe-centralized and safe-minimal, and we prove that nice GFG-tNCWs that attain these properties are minimal. In Sections 4 and 5, we are going to show that the two properties can be attained in polynomial time. Before we start, let us note that a GFG-tNCW may be nice and still not be minimal. A simple example is a GFG-tNCW A_fm for the language (a + b)^* · a^ω that has two states, both with an ᾱ-self-loop labeled a and an α-transition labeled b to the other state. It is easy to see that A_fm is nice but not minimal.

Consider a tNCW A = ⟨Σ, Q, q_0, δ, α⟩. A run r of A is safe if it does not traverse α-transitions. The safe language of A, denoted L_safe(A), is the set of infinite words w such that there is a safe run of A on w. Recall that two states q, s ∈ Q are equivalent (q ∼_A s) if L(A^q) = L(A^s). Then, q and s are strongly-equivalent, denoted q ≈_A s, if q ∼_A s and L_safe(A^q) = L_safe(A^s). Finally, q is subsafe-equivalent to s, denoted q ≾_A s, if q ∼_A s and L_safe(A^q) ⊆ L_safe(A^s). Note that the three relations are transitive. When A is clear from the context, we omit it from the notations, and thus write L_safe(q), q ≾ s, etc. The tNCW A is safe-minimal if it has no strongly-equivalent states. Then, A is safe-centralized if for every two states q, s ∈ Q, if q ≾ s, then q and s are in the same safe component of A.

Example 3. The nice GFG-tNCW A_fm described above is neither safe-minimal (its two states are strongly-equivalent) nor safe-centralized (its two states are in different safe components). As another example, consider the tDCW A appearing in Figure 1. The dashed transitions are α-transitions. All the states of A are equivalent, yet they all differ in their safe language. Accordingly, A is safe-minimal. Since a^ω = L_safe(A^{q_2}) ⊆ L_safe(A^{q_0}), we have that q_2 ≾ q_0. Hence, as q_0 and q_2 are in different safe components, the tDCW A is not safe-centralized.

[Figure 1: The tDCW A; dashed transitions are α-transitions.]

Proposition 4. Consider a nice GFG-tNCW A and states q and s of A such that q ≈ s (q ≾ s). For every letter σ ∈ Σ and ᾱ-transition ⟨q, σ, q′⟩, there is an ᾱ-transition ⟨s, σ, s′⟩ such that q′ ≈ s′ (q′ ≾ s′, respectively).

Proof. We prove the proposition for the case q ≈ s. The case q ≾ s is similar. Since A is normal, the existence of the ᾱ-transition ⟨q, σ, q′⟩ implies that there is a safe run from q′ back to q. Hence, there is a word z ∈ L_safe(A^{q′}). Clearly, σ · z is in L_safe(A^q). Now, since q ≈ s, we have that L_safe(A^q) = L_safe(A^s). In particular, σ · z ∈ L_safe(A^s), and thus there is an ᾱ-transition ⟨s, σ, s′⟩. We prove that q′ ≈ s′. Since L(A^q) = L(A^s) and A is semantically deterministic, then, by Proposition 1, we have that L(A^{q′}) = L(A^{s′}). It is left to prove that L_safe(A^{q′}) = L_safe(A^{s′}). We prove that L_safe(A^{q′}) ⊆ L_safe(A^{s′}); the second direction is similar. Since A is safe deterministic, the transition ⟨s, σ, s′⟩ is the only σ-labeled ᾱ-transition from s. Hence, if by contradiction there is a word z ∈ L_safe(A^{q′}) \ L_safe(A^{s′}), we get that σ · z ∈ L_safe(A^q) \ L_safe(A^s), contradicting the fact that L_safe(A^q) = L_safe(A^s).

We continue with propositions that relate two automata, A = ⟨Σ, Q_A, q_0^A, δ_A, α_A⟩ and B = ⟨Σ, Q_B, q_0^B, δ_B, α_B⟩. We assume that Q_A and Q_B are disjoint, and extend the ∼, ≈, and ≾ relations to states in Q_A ∪ Q_B in the expected way. For example, for q ∈ Q_A and s ∈ Q_B, we use q ∼ s to indicate that L(A^q) = L(B^s).

Proposition 5.
Let A and B be equivalent nice GFG-tNCWs. For every state q ∈ Q_A, there is a state s ∈ Q_B such that q ≾ s.

Proof. Let g be a strategy witnessing B's GFGness. Consider a state q ∈ Q_A. Let u ∈ Σ^* be such that q ∈ δ_A(q_0^A, u). Since A and B are equivalent and semantically deterministic, an iterative application of Proposition 1 implies that for every state q′ ∈ δ_B(q_0^B, u), we have q ∼ q′. In particular, q ∼ g(u). If L_safe(A^q) = ∅, then we are done, as L_safe(A^q) ⊆ L_safe(B^{g(u)}). If L_safe(A^q) ≠ ∅, then the proof proceeds as follows. Assume by way of contradiction that for every state s ∈ Q_B that is equivalent to q, it holds that L_safe(A^q) ⊈ L_safe(B^s). We define an infinite word z such that A accepts u · z, yet g(u · z) is a rejecting run of B. Since A and B are equivalent, this contradicts the fact that g witnesses B's GFGness. We define z as follows. Let s_0 = g(u). Since L_safe(A^q) ⊈ L_safe(B^{s_0}), there is a finite nonempty word z_1 such that there is a safe run of A^q on z_1, but every run of B^{s_0} on z_1 is not safe. In particular, the run of B^{s_0} that is induced by g, namely g(u), g(u · z_1[1, 1]), g(u · z_1[1, 2]), . . . , g(u · z_1), traverses an α-transition. Since A is normal, we can define z_1 so that the safe run of A^q on z_1 ends in q. Let s_1 = g(u · z_1). We have so far two finite runs: q →^{z_1} q and s_0 →^{z_1} s_1, where the first run is safe, and the second is not. Now, since q ∼ s_0, then again by Proposition 1 we have that q ∼ s_1, and by applying the same considerations, we can define a finite nonempty word z_2 and s_2 = g(u · z_1 · z_2) such that q →^{z_2} q and s_1 →^{z_2} s_2, where the first run is safe, and the second is not. After at most |Q_B| iterations, we get that there are 0 ≤ j_1 < j_2 ≤ |Q_B| such that s_{j_1} = s_{j_2}, and we define z = z_1 · z_2 · · · z_{j_1} · (z_{j_1+1} · · · z_{j_2})^ω. Since j_1 < j_2, the extension z_{j_1+1} · · · z_{j_2} is nonempty and thus z is infinite. On the one hand, since q ∈ δ_A(q_0^A, u) and there is a safe run of A^q on z, we have that u · z ∈ L(A). On the other hand, the run g(u · z) traverses α-transitions infinitely often, and is thus rejecting.

Proposition 6. Let A and B be equivalent nice GFG-tNCWs. For every state p ∈ Q_A, there are states q ∈ Q_A and s ∈ Q_B such that p ≾ q and q ≈ s.

Proof. The proposition follows from the combination of Proposition 5 with the transitivity of ≾ and the fact that Q_A and Q_B are finite. Formally, consider the directed bipartite graph G = ⟨Q_A ∪ Q_B, E⟩, where E ⊆ (Q_A × Q_B) ∪ (Q_B × Q_A) is such that ⟨p_1, p_2⟩ ∈ E iff p_1 ≾ p_2. Proposition 5 implies that E is total. That is, from every state in Q_A there is an edge to some state in Q_B, and from every state in Q_B there is an edge to some state in Q_A. Since Q_A and Q_B are finite, this implies that for every p ∈ Q_A, there is a path in G that starts in p and reaches a state q ∈ Q_A (possibly q = p) that belongs to a nonempty cycle. We take s to be some state in Q_B in this cycle. By the transitivity of ≾, we have that p ≾ q, q ≾ s, and s ≾ q. The last two imply that q ≈ s, and we are done.

Lemma 7. Consider a nice GFG-tNCW A. If A is safe-centralized and safe-minimal, then for every nice GFG-tNCW B equivalent to A, there is an injection η : S(A) → S(B) such that for every safe component T ∈ S(A), it holds that |T| ≤ |η(T)|.

Proof. We define η as follows. Consider a safe component T ∈ S(A). Let p_T be some state in T. By Proposition 6, there are states q_T ∈ Q_A and s_T ∈ Q_B such that p_T ≾ q_T and q_T ≈ s_T. Since A is safe-centralized, the states p_T and q_T are in the same safe component, thus q_T ∈ T.
We define η(T) to be the safe component of s_T in B. We show that η is an injection; that is, for every two safe components T_1 and T_2 in S(A), it holds that η(T_1) ≠ η(T_2). Assume by way of contradiction that T_1 and T_2 are such that s_{T_1} and s_{T_2}, chosen as described above, are in the same safe component of B. Then, there is a safe run from s_{T_1} to s_{T_2}. Since s_{T_1} ≈ q_{T_1}, an iterative application of Proposition 4 implies that there is a safe run from q_{T_1} to some state q such that q ≈ s_{T_2}. Since the run from q_{T_1} to q is safe, the states q_{T_1} and q are in the same safe component, and so q ∈ T_1. Since q_{T_2} ≈ s_{T_2}, then q ≈ q_{T_2}. Since A is safe-centralized, the latter implies that q and q_{T_2} are in the same safe component, and so q ∈ T_2, and we have reached a contradiction.

It is left to prove that for every safe component T ∈ S(A), it holds that |T| ≤ |η(T)|. Let T ∈ S(A) be a safe component of A. By the definition of η, there are q_T ∈ T and s_T ∈ η(T) such that q_T ≈ s_T. Since A is normal, there is a safe run q_0, q_1, . . . , q_m of A that starts in q_T and traverses all the states in T. Since A is safe-minimal, no two states in T are strongly-equivalent. Therefore, there is a subset I ⊆ {0, 1, . . . , m} of indices, with |I| = |T|, such that for every two different indices i_1, i_2 ∈ I, it holds that q_{i_1} is not strongly-equivalent to q_{i_2}. By applying Proposition 4 iteratively, there is a safe run s_0, s_1, . . . , s_m of B that starts in s_T and such that for every 0 ≤ i ≤ m, it holds that q_i ≈ s_i. Since the run is safe, it stays in η(T). Then, however, for every two different indices i_1, i_2 ∈ I, the states s_{i_1} and s_{i_2} are not strongly-equivalent, and so s_{i_1} ≠ s_{i_2}. Hence, |η(T)| ≥ |I| = |T|.

We can now prove that the additional two properties imply the minimality of nice GFG-tNCWs.

Theorem 8. Consider a nice GFG-tNCW A. If A is safe-centralized and safe-minimal, then A is a minimal GFG-tNCW for L(A).

Proof. Let B be a GFG-tNCW equivalent to A. By Theorem 2, we can assume that B is nice. Indeed, otherwise we can make it nice without increasing its state space. Then, by Lemma 7, there is an injection η : S(A) → S(B) such that for every safe component T ∈ S(A), it holds that |T| ≤ |η(T)|. Hence,

  |A| = Σ_{T ∈ S(A)} |T| ≤ Σ_{T ∈ S(A)} |η(T)| ≤ Σ_{T′ ∈ S(B)} |T′| = |B|.

Indeed, the first inequality follows from the fact that |T| ≤ |η(T)|, and the second inequality follows from the fact that η is injective.

Remark 9. Recall that we assume that the transition function of GFG-tNCWs is total. Clearly, a non-total GFG-tNCW can be made total by adding a rejecting sink. One may wonder whether the additional state that this process involves interferes with our minimality proof. The answer is negative: if B in Theorem 8 is not total, then, by Proposition 5, A has a state s such that q_rej ≾ s, where q_rej is a rejecting sink we need to add to B if we want to make it total. Thus, L(A^s) = ∅, and we may not count it if we allow GFG-tNCWs without a total transition function.

4 Safe Centralization

Consider a nice GFG-tNCW A = ⟨Σ, Q_A, q_0^A, δ_A, α_A⟩. Recall that A is safe-centralized if for every two states q, s ∈ Q_A, if q ≾ s, then q and s are in the same safe component. In this section we describe how to turn a given nice GFG-tNCW into a nice safe-centralized GFG-tNCW. The resulting tNCW is also going to be α-homogenous: for every state q ∈ Q_A and letter σ ∈ Σ, either δ_A^α(q, σ) = ∅ or δ_A^ᾱ(q, σ) = ∅.

Let H ⊆ S(A) × S(A) be such that for all safe components S, S′ ∈ S(A), we have that H(S, S′) iff there exist states q ∈ S and q′ ∈ S′ such that q ≾ q′. That is, when S ≠ S′, the states q and q′ witness that A is not safe-centralized. Recall that q ≾ q′ iff L(A^q) = L(A^{q′}) and L_safe(A^q) ⊆ L_safe(A^{q′}). Since language containment for GFG-tNCWs can be checked in polynomial time [12, 17], the first condition can be checked in polynomial time. Since A is safe deterministic, the second condition reduces to language containment between deterministic automata and can also be checked in polynomial time. Hence, the relation H can be computed in polynomial time.
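As an illustration of these two checks, the sketch below (ours, reusing the TNCW helpers from the Section 2 sketches) decides safe-language containment directly on the deterministic safe parts and derives H from it. The predicate `equiv`, deciding language equivalence of states, is assumed to be given as input; it stands in for the polynomial containment checks of [12, 17].

```python
from collections import deque
from typing import Callable, FrozenSet, Optional

def safe_succ(aut: TNCW, q: State, a: Letter) -> Optional[State]:
    """The unique non-alpha a-successor of q, or None (aut is safe deterministic)."""
    for s in aut.successors(q, a):
        if (q, a, s) not in aut.alpha:
            return s
    return None

def safe_lang_contained(aut: TNCW, q: State, s: State) -> bool:
    """L_safe(A^q) <= L_safe(A^s) for a nice tNCW A.  Since the safe parts are
    deterministic and A is normal, containment fails iff some pair (x, y),
    reachable from (q, s) by matching safe transitions, has a safe x-move on a
    letter for which y has no safe move."""
    seen, queue = {(q, s)}, deque([(q, s)])
    while queue:
        x, y = queue.popleft()
        for a in aut.sigma:
            xs, ys = safe_succ(aut, x, a), safe_succ(aut, y, a)
            if xs is None:
                continue
            if ys is None:
                return False
            if (xs, ys) not in seen:
                seen.add((xs, ys))
                queue.append((xs, ys))
    return True

def subsafe(aut: TNCW, equiv: Callable[[State, State], bool], q: State, s: State) -> bool:
    """q is subsafe-equivalent to s: L(A^q) = L(A^s) and L_safe(A^q) <= L_safe(A^s)."""
    return equiv(q, s) and safe_lang_contained(aut, q, s)

def relation_H(aut: TNCW, equiv) -> Set[Tuple[FrozenSet[State], FrozenSet[State]]]:
    """H(S, S') iff some q in S and q' in S' satisfy q subsafe-equivalent to q'."""
    comps = [frozenset(c) for c in safe_components(aut)]
    return {(S, S2) for S in comps for S2 in comps
            if any(subsafe(aut, equiv, q, q2) for q in S for q2 in S2)}
```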
Lemma 10. Consider safe components S, S′ ∈ S(A) such that H(S, S′). Then, for every p ∈ S there is p′ ∈ S′ such that p ≾ p′.

Proof. Since H(S, S′), then, by definition, there are states q ∈ S and q′ ∈ S′ such that q ≾ q′. Let p be a state in S. Since A is normal, there is a safe run from q to p in S. Since q ≾ q′, an iterative application of Proposition 4 implies that there is a safe run from q′ to some state p′ in S′ for which p ≾ p′, and we are done.

Lemma 11. The relation H is transitive: for all safe components S, S′, S″ ∈ S(A), if H(S, S′) and H(S′, S″), then H(S, S″).

Proof. Let S, S′, S″ ∈ S(A) be safe components of A such that H(S, S′) and H(S′, S″). Since H(S, S′), there are states q ∈ S and q′ ∈ S′ such that q ≾ q′. Now, since H(S′, S″), we get by Lemma 10 that for all states in S′, in particular for q′, there is a state q″ ∈ S″ such that q′ ≾ q″. The transitivity of ≾ then implies that q ≾ q″, and so H(S, S″).

We say that a set 𝒮 ⊆ S(A) is a frontier of A if for every safe component S ∈ S(A), there is a safe component S′ ∈ 𝒮 with H(S, S′), and for all safe components S, S′ ∈ 𝒮 such that S ≠ S′, we have that ¬H(S, S′) and ¬H(S′, S). Once H is calculated, a frontier of A can be found in linear time: for example, as H is transitive, we can take one vertex from each ergodic SCC in the graph ⟨S(A), H⟩. Note that all frontiers of A are of the same size, namely the number of ergodic SCCs in this graph.

Given a frontier 𝒮 of A, we define the automaton B_𝒮 = ⟨Σ, Q_𝒮, q_0^𝒮, δ_𝒮, α_𝒮⟩, where Q_𝒮 = {q ∈ Q_A : q ∈ S for some S ∈ 𝒮}, and the other components are defined as follows. The initial state q_0^𝒮 is chosen such that q_0^𝒮 ∼_A q_0^A. Specifically, if q_0^A ∈ Q_𝒮, we take q_0^𝒮 = q_0^A. Otherwise, by Lemma 10 and the definition of 𝒮, there is a state q′ ∈ Q_𝒮 such that q_0^A ≾ q′, and we take q_0^𝒮 = q′. The transitions in B_𝒮 are either ᾱ-transitions of A, or α-transitions that we add among the safe components in 𝒮 in a way that preserves language equivalence. Formally, consider a state q ∈ Q_𝒮 and a letter σ ∈ Σ. If δ_A^ᾱ(q, σ) ≠ ∅, then δ_𝒮^ᾱ(q, σ) = δ_A^ᾱ(q, σ) and δ_𝒮^α(q, σ) = ∅. If δ_A^ᾱ(q, σ) = ∅, then δ_𝒮^ᾱ(q, σ) = ∅ and δ_𝒮^α(q, σ) = {q′ ∈ Q_𝒮 : there is q″ ∈ δ_A^α(q, σ) such that q′ ∼_A q″}. Note that B_𝒮 is α-homogenous.

Example 12. Consider the tDCW A appearing in Figure 1. Recall that the dashed transitions are α-transitions. Since A is normal and deterministic, it is nice. By removing the α-transitions of A, we get the safe components described in Figure 2. Since q_2 ≾ q_0, the automaton A has a single frontier 𝒮 = {{q_0, q_1}}. The automaton B_𝒮 appears in Figure 3. As all the states of A are equivalent, we direct a σ-labeled α-transition to q_0 and to q_1, for every state with no σ-labeled transition in 𝒮.

[Figure 2: The safe components of A.]
[Figure 3: The tNCW B_{{q_0, q_1}}.]
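The construction of B_𝒮 can be phrased directly in terms of the sketches above. The following code (ours, with the same hedges: `equiv` is an assumed language-equivalence predicate) picks a frontier as one component per ergodic SCC of ⟨S(A), H⟩, which is sound since H is transitive (Lemma 11), and then redirects the missing σ-moves into the frontier with α-transitions, as in the definition of δ_𝒮.

```python
def frontier(aut: TNCW, equiv) -> List[FrozenSet[State]]:
    """One safe component from each ergodic SCC of the graph <S(A), H>."""
    comps = [frozenset(c) for c in safe_components(aut)]
    H = relation_H(aut, equiv)
    hgraph = {S: {T for T in comps if T != S and (S, T) in H} for S in comps}
    chosen = []
    for scc in sccs(hgraph):
        if all(hgraph[S] <= scc for S in scc):   # ergodic: no H-edge leaves the SCC
            chosen.append(next(iter(scc)))       # one representative component
    return chosen

def build_B_S(aut: TNCW, front: List[FrozenSet[State]], equiv) -> TNCW:
    """The automaton B_S over the states of the frontier components."""
    Q_S = set().union(*front)
    # initial state: q0 itself if it survived; otherwise some q' with q0 subsafe-
    # equivalent to q', which exists by Lemma 10 and the definition of a frontier
    init = aut.init if aut.init in Q_S else next(
        q for q in Q_S if subsafe(aut, equiv, aut.init, q))
    delta: Dict[Tuple[State, Letter], Set[State]] = {}
    alpha: Set[Transition] = set()
    for q in Q_S:
        for a in aut.sigma:
            safe_s = safe_succ(aut, q, a)
            if safe_s is not None:
                # keep the unique safe move; normality keeps it inside Q_S
                delta[(q, a)] = {safe_s}
            else:
                # q has no safe a-move, so (A being total) all a-successors are
                # alpha-successors: add alpha-transitions to all frontier states
                # equivalent to some such successor
                targets = {p for p in Q_S
                           if any(equiv(p, s) for s in aut.successors(q, a))}
                delta[(q, a)] = targets
                alpha |= {(q, a, p) for p in targets}
    return TNCW(aut.sigma, Q_S, init, delta, alpha)
```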
We extend Proposition 1 to the setting of A and B_𝒮:

Proposition 13. Consider states q and s of A and B_𝒮, respectively, a letter σ ∈ Σ, and transitions ⟨q, σ, q′⟩ and ⟨s, σ, s′⟩ of A and B_𝒮, respectively. If q ∼_A s, then q′ ∼_A s′.

Proof. If ⟨s, σ, s′⟩ is an ᾱ-transition of B_𝒮, then, by the definition of δ_𝒮, it is also an ᾱ-transition of A. Hence, since q ∼_A s and A is nice, in particular semantically deterministic, we get by Proposition 1 that q′ ∼_A s′. If ⟨s, σ, s′⟩ is an α-transition of B_𝒮, then, by the definition of δ_𝒮, there is some s″ ∈ δ_A(s, σ) with s′ ∼_A s″. Again, since q ∼_A s and A is semantically deterministic, we have by Proposition 1 that s″ ∼_A q′, and thus s′ ∼_A q′.

Proposition 14. Let q and s be states of A and B_𝒮, respectively, with q ∼_A s. Then B_𝒮^s is a GFG-tNCW equivalent to A^q.

Proof. We first prove that L(B_𝒮^s) ⊆ L(A^q). Consider a word w = σ_1 σ_2 . . . ∈ L(B_𝒮^s). Let s_0, s_1, s_2, . . . be an accepting run of B_𝒮^s on w. Then, there is i ≥ 0 such that s_i, s_{i+1}, . . . is a safe run of B_𝒮^{s_i} on the suffix w[i + 1, ∞]. Let q_0, q_1, . . . , q_i be a run of A^q on the prefix w[1, i]. Since q_0 ∼_A s_0, we get, by an iterative application of Proposition 13, that q_i ∼_A s_i. In addition, as the run of B_𝒮^{s_i} on the suffix w[i + 1, ∞] is safe, it is also a safe run of A^{s_i}. Hence, w[i + 1, ∞] ∈ L(A^{q_i}), and thus q_0, q_1, . . . , q_i can be extended to an accepting run of A^q on w.

Next, we prove that L(A^q) ⊆ L(B_𝒮^s) and that B_𝒮^s is a GFG-tNCW. We do this by defining a strategy g : Σ^* → Q_𝒮 such that for all words w ∈ L(A^q), we have that g(w) is an accepting run of B_𝒮^s on w. First, g(ε) = s. Then, for u ∈ Σ^* and σ ∈ Σ, we define g(u · σ) as follows. Recall that A is nice; so, in particular, A^q is GFG. Let f be a strategy witnessing A^q's GFGness. If δ_𝒮^ᾱ(g(u), σ) ≠ ∅, then g(u · σ) = q′ for some q′ ∈ δ_𝒮^ᾱ(g(u), σ). If δ_𝒮^ᾱ(g(u), σ) = ∅, then g(u · σ) = q′ for some state q′ ∈ Q_𝒮 such that f(u · σ) ≾_A q′. Note that since 𝒮 is a frontier, such a state q′ exists.

We prove that g is consistent with δ_𝒮. In fact, we prove a stronger claim, namely that for all u ∈ Σ^* and σ ∈ Σ, we have that f(u) ∼_A g(u) and ⟨g(u), σ, g(u · σ)⟩ ∈ Δ_𝒮. The proof proceeds by induction on |u|. For the induction base, as f(ε) = q, g(ε) = s, and q ∼_A s, we are done. Given u and σ, consider a transition ⟨g(u), σ, s′⟩ ∈ Δ_𝒮. Since B_𝒮 is total, such a transition exists. We distinguish between two cases. If δ_𝒮^ᾱ(g(u), σ) ≠ ∅, then, as B_𝒮 is α-homogenous and safe deterministic, the state s′ is the only state in δ_𝒮^ᾱ(g(u), σ). Hence, by the definition of g, we have that g(u · σ) = s′ and so ⟨g(u), σ, g(u · σ)⟩ ∈ Δ_𝒮. If δ_𝒮^ᾱ(g(u), σ) = ∅, we claim that g(u · σ) ∼_A s′. Then, as s′ ∈ δ_𝒮^α(g(u), σ), the definition of δ_𝒮 for the case δ_𝒮^ᾱ(g(u), σ) = ∅ implies that ⟨g(u), σ, g(u · σ)⟩ ∈ Δ_𝒮. By the induction hypothesis, we have that f(u) ∼_A g(u). Hence, as ⟨f(u), σ, f(u · σ)⟩ ∈ Δ_A and ⟨g(u), σ, s′⟩ ∈ Δ_𝒮, we have, by Proposition 13, that f(u · σ) ∼_A s′. Recall that g is defined so that f(u · σ) ≾_A g(u · σ). In particular, f(u · σ) ∼_A g(u · σ). Hence, by transitivity of ∼_A, we have that g(u · σ) ∼_A s′. In addition, by the induction hypothesis, we have that f(u) ∼_A g(u), and so, in both cases, Proposition 13 implies that f(u · σ) ∼_A g(u · σ).

It is left to prove that for every infinite word w = σ_1 σ_2 . . . ∈ Σ^ω, if w ∈ L(A^q), then g(w) is accepting. Assume that w ∈ L(A^q), and consider the run f(w) of A^q on w. Since f(w) is accepting, there is i ≥ 0 such that f(w[1, i]), f(w[1, i + 1]), . . . is a safe run of A^{f(w[1,i])} on the suffix w[i + 1, ∞].
We prove that g(w) may traverse at most one α-transition when it reads the suffix w[i + 1, ∞]. Assume that there is some j ≥ i such that ⟨g(w[1, j]), σ_{j+1}, g(w[1, j + 1])⟩ ∈ α_𝒮. Then, by g's definition, we have that f(w[1, j + 1]) ≾_A g(w[1, j + 1]). Therefore, as w[j + 2, ∞] ∈ L_safe(A^{f(w[1,j+1])}) ⊆ L_safe(A^{g(w[1,j+1])}), and as B_𝒮 follows the safe components in 𝒮, we have that L_safe(A^{g(w[1,j+1])}) = L_safe(B_𝒮^{g(w[1,j+1])}), and thus w[j + 2, ∞] ∈ L_safe(B_𝒮^{g(w[1,j+1])}). Since B_𝒮 is α-homogenous and safe-deterministic, there is a single run of B_𝒮^{g(w[1,j+1])} on w[j + 2, ∞], and this is the run that g follows. Therefore, g(w[1, j + 1]), g(w[1, j + 2]), . . . is a safe run, and we are done.

Proposition 15. For every frontier 𝒮, the GFG-tNCW B_𝒮 is nice, safe-centralized, and α-homogenous.

Proof. It is easy to see that the fact that A is nice implies that B_𝒮 is normal and safe deterministic. It can be shown that all the states in B_𝒮 are reachable; in any case, states that are not reachable are easy to detect, and their removal affects neither B_𝒮's language nor its other properties. Finally, Proposition 14 implies that all its states are GFG. To conclude that B_𝒮 is nice, we prove below that it is semantically deterministic.

Consider transitions ⟨q, σ, s_1⟩ and ⟨q, σ, s_2⟩ in Δ_𝒮. We need to show that s_1 ∼_{B_𝒮} s_2. By the definition of δ_𝒮, there are transitions ⟨q, σ, s′_1⟩ and ⟨q, σ, s′_2⟩ in Δ_A for states s′_1 and s′_2 such that s_1 ∼_A s′_1 and s_2 ∼_A s′_2. As A is semantically deterministic, we have that s′_1 ∼_A s′_2, and thus, by transitivity of ∼_A, we get that s_1 ∼_A s_2. Then, Proposition 14 implies that L(A^{s_1}) = L(B_𝒮^{s_1}) and L(A^{s_2}) = L(B_𝒮^{s_2}), and so we get that s_1 ∼_{B_𝒮} s_2. Thus, B_𝒮 is semantically deterministic.

As we noted in the definition of its transitions, B_𝒮 is α-homogenous. It is thus left to prove that B_𝒮 is safe-centralized. Let q and s be states of B_𝒮 such that q ≾_{B_𝒮} s; that is, L(B_𝒮^q) = L(B_𝒮^s) and L_safe(B_𝒮^q) ⊆ L_safe(B_𝒮^s). Let S, T ∈ 𝒮 be the safe components of q and s, respectively. We need to show that S = T. By Proposition 14, we have that L(A^q) = L(B_𝒮^q) and L(A^s) = L(B_𝒮^s). As B_𝒮 follows the safe components in 𝒮, we have that L_safe(A^q) = L_safe(B_𝒮^q) and L_safe(A^s) = L_safe(B_𝒮^s). Hence, q ≾_A s, implying H(S, T). Since 𝒮 is a frontier, this is possible only when S = T.

Theorem 16. Every nice GFG-tNCW can be turned in polynomial time into an equivalent nice, safe-centralized, and α-homogenous GFG-tNCW.

5 Safe Minimization

In the setting of finite words, a quotient automaton is obtained by merging equivalent states, and is guaranteed to be minimal. In the setting of co-Büchi automata, it may not be possible to define an equivalent language on top of the quotient automaton. For example, all the states in the GFG-tNCW A in Figure 1 are equivalent, and still it is impossible to define its language on top of a single-state tNCW. In this section we show that when we start with a nice, safe-centralized, and α-homogenous GFG-tNCW B, the transition to a quotient automaton, namely merging of strongly-equivalent states, is well defined and results in a GFG-tNCW equivalent to B that attains all the helpful properties of B, and is also safe-minimal (in fact, α-homogeneity is not required, but as the GFG-tNCW B_𝒮 obtained in Section 4 is α-homogenous, which simplifies the proof, we rely on it). By Theorem 8, it is also minimal.

Consider a nice, safe-centralized, and α-homogenous GFG-tNCW B = ⟨Σ, Q, q_0, δ, α⟩. For a state q ∈ Q, define [q] = {q′ ∈ Q : q ≈_B q′}. We define the tNCW C = ⟨Σ, Q_C, [q_0], δ_C, α_C⟩, where Q_C = {[q] : q ∈ Q}, the transition function is such that ⟨[q], σ, [p]⟩ ∈ Δ_C iff there are q′ ∈ [q] and p′ ∈ [p] such that ⟨q′, σ, p′⟩ ∈ Δ, and ⟨[q], σ, [p]⟩ ∈ α_C iff ⟨q′, σ, p′⟩ ∈ α.
Note that B being α-homogenous implies that α_C is well defined; that is, independent of the choice of q′ and p′. To see why, assume that ⟨q′, σ, p′⟩ is an ᾱ-transition and let q″ be a state in [q]. As q′ ≈_B q″, we have by Proposition 4 that there is p″ ∈ [p] such that ⟨q″, σ, p″⟩ is an ᾱ-transition. Thus, as B is α-homogenous, there is no σ-labeled α-transition from q″ to a state in [p]. Note that we have proved that if ⟨[q], σ, [p]⟩ is an ᾱ-transition of C, then for every q′ ∈ [q], there is p′ ∈ [p] such that ⟨q′, σ, p′⟩ is an ᾱ-transition of B, and thus the ⊇-direction of the following proposition, suggesting that a safe run in C induces a safe run in B, follows by a simple induction. The ⊆-direction follows immediately from the definition of C.

Proposition 17. For every [p] ∈ Q_C and every s ∈ [p], it holds that L_safe(B^s) = L_safe(C^{[p]}).

We extend Propositions 1 and 13 to the setting of B and C:

Proposition 18. Consider states s ∈ Q and [p] ∈ Q_C, a letter σ ∈ Σ, and transitions ⟨s, σ, s′⟩ and ⟨[p], σ, [p′]⟩ of B and C, respectively. If s ∼ p, then s′ ∼ p′.

Proof. As ⟨[p], σ, [p′]⟩ is a transition of C, there are states t ∈ [p] and t′ ∈ [p′] such that ⟨t, σ, t′⟩ ∈ Δ. If s ∼ p, then s ∼ t. Since B is nice, in particular semantically deterministic, and ⟨s, σ, s′⟩ ∈ Δ, we get by Proposition 1 that s′ ∼ t′. Thus, as t′ ∼ p′, we are done.

Proposition 19. For every [p] ∈ Q_C and s ∈ [p], we have that C^{[p]} is a GFG-tNCW equivalent to B^s.

Proof. We first prove that L(C^{[p]}) ⊆ L(B^s). Consider a word w = σ_1 σ_2 . . . ∈ L(C^{[p]}). Let [p_0], [p_1], [p_2], . . . be an accepting run of C^{[p]} on w. Then, there is i ≥ 0 such that [p_i], [p_{i+1}], . . . is a safe run of C^{[p_i]} on the suffix w[i + 1, ∞]. Let s_0, s_1, . . . , s_i be a run of B^s on the prefix w[1, i]. Note that s_0 = s. Since s_0 ∈ [p_0], we have that s_0 ∼ p_0, and thus an iterative application of Proposition 18 implies that s_i ∼ p_i. In addition, as w[i + 1, ∞] is in L_safe(C^{[p_i]}), we get, by Proposition 17, that w[i + 1, ∞] ∈ L_safe(B^{p_i}). Since L_safe(B^{p_i}) ⊆ L(B^{p_i}) and s_i ∼ p_i, we have that w[i + 1, ∞] ∈ L(B^{s_i}). Hence, s_0, s_1, . . . , s_i can be extended to an accepting run of B^s on w.

Next, we prove that L(B^s) ⊆ L(C^{[p]}) and that C^{[p]} is a GFG-tNCW. We do this by defining a strategy h : Σ^* → Q_C such that for all words w ∈ L(B^s), we have that h(w) is an accepting run of C^{[p]} on w. We define h as follows. Recall that B is nice; so, in particular, B^s is GFG. Let g be a strategy witnessing B^s's GFGness. We define h(u) = [g(u)], for every finite word u ∈ Σ^*. Consider a word w ∈ L(B^s), and consider the accepting run g(w) = g(w[1, 0]), g(w[1, 1]), g(w[1, 2]), . . . of B^s on w. Note that by the definition of C, we have that h(w) = [g(w[1, 0])], [g(w[1, 1])], [g(w[1, 2])], . . . is an accepting run of C^{[p]} on w, and so we are done.

Proposition 20. The GFG-tNCW C is nice, safe-centralized, and safe-minimal.

The proof of the proposition is in the full version. The considerations are similar to those in the proof of Proposition 15. In particular, for safe minimality, note that for states q and s of B, we have that [q] ≈ [s] iff [q] ≾ [s] and [s] ≾ [q]. Thus, it is sufficient to prove that if [q] ≾ [s] then q ≾ s. We can now conclude the following:

Theorem 21. Every nice, safe-centralized, and α-homogenous GFG-tNCW can be turned in polynomial time into an equivalent nice, safe-centralized, and safe-minimal GFG-tNCW.
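A sketch of this quotient step, in the same illustrative Python setting as before (again with the language-equivalence predicate `equiv` assumed as input), groups strongly-equivalent states and projects the transitions and α onto the classes; by α-homogeneity the classification of the projected transitions does not depend on the chosen representatives.

```python
def strongly_equivalent_classes(aut: TNCW, equiv) -> Dict[State, FrozenSet[State]]:
    """[q] = the states strongly equivalent to q: same language and same safe language."""
    return {q: frozenset(s for s in aut.states
                         if equiv(q, s)
                         and safe_lang_contained(aut, q, s)
                         and safe_lang_contained(aut, s, q))
            for q in aut.states}

def quotient(aut: TNCW, equiv) -> TNCW:
    """Merge strongly-equivalent states of a nice, safe-centralized,
    alpha-homogenous GFG-tNCW; by Theorems 8 and 21 the result is a minimal
    equivalent GFG-tNCW.  Merged states are frozensets of original states."""
    cls = strongly_equivalent_classes(aut, equiv)
    delta: Dict[Tuple[FrozenSet[State], Letter], Set[FrozenSet[State]]] = {}
    alpha: Set[Tuple[FrozenSet[State], Letter, FrozenSet[State]]] = set()
    for (q, a), succs in aut.delta.items():
        for s in succs:
            delta.setdefault((cls[q], a), set()).add(cls[s])
            if (q, a, s) in aut.alpha:        # well defined by alpha-homogeneity
                alpha.add((cls[q], a, cls[s]))
    return TNCW(aut.sigma, set(cls.values()), cls[aut.init], delta, alpha)
```

Chained after the earlier sketches (the Theorem 2 transformations, then build_B_S, then quotient), this mirrors the overall polynomial minimization procedure of the paper.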
6 Discussion

We presented a polynomial minimization algorithm for GFG-tNCWs. In contrast, minimization of DCWs is NP-complete [33]. This raises a natural question, as to whether both relaxations of the problem, namely the consideration of GFG automata, rather than deterministic ones, and the consideration of transition-based acceptance, rather than state-based acceptance, are crucial for efficiency. Our conjecture is that minimization of transition-based DCWs (and hence, also transition-based DBWs) can be solved in polynomial time; thus, the relaxation to GFG is not needed. Our conjecture is based on the understanding that the quotient construction fails for automata on infinite words as it does not capture traversal of transitions. Moreover, the study of GFG automata so far shows that their behavior is similar to that of deterministic automata. In particular, it is not hard to see that the NP-hardness proof of Schewe for DBW minimization applies also to GFG-NBWs.

The use of transition-based acceptance is related to another open problem in the context of DBW minimization: is there a 2-approximation polynomial algorithm for it, that is, one that generates a DBW that is at most twice as big as a minimal one? Note that a tight minimization for the transition-based case would imply a positive answer here. Note also that the vertex-cover problem, used in Schewe's reduction, has a polynomial 2-approximation.

As described in Section 1, there is recently growing use of automata with transition-based acceptance. Our work here is further evidence of their usefulness. We find the study of minimization of GFG automata of interest also beyond being an intermediate result in the quest for efficient transition-based DBW minimization. Indeed, GFG automata are important in practice, as they are used in synthesis and control, and in the case of the co-Büchi acceptance condition, they may be exponentially more succinct than their deterministic equivalents. Another open problem, which is interesting from both the theoretical and practical points of view, is minimization of GFG-tNBWs. Note that unlike the deterministic case, GFG-tNBWs and GFG-tNCWs are not dual. Also, experience shows that algorithms for GFG-tNBWs and GFG-tNCWs are quite different [3, 17, 4, 2].

Finally, recall that there may be different minimal tDCWs for a given language of infinite words. Our results show that the picture for minimal GFG-tNCWs is cleaner: Consider a language L ⊆ Σ^ω, and let A be a minimal GFG-tNCW for L obtained by safe-centralizing and safe-minimizing a nice GFG-tNCW for it. Consider a nice minimal GFG-tNCW B for L. Then, the injection η : S(A) → S(B) from Lemma 7 is actually a bijection; that is, η is one-to-one and onto. Indeed, for every safe component T ∈ S(A) it holds that |T| = |η(T)|. Moreover, as both A and B are nice, related safe components are isomorphic; thus there is a bijection κ : Q_A → Q_B such that for every q ∈ Q_A, we have that q ≈ κ(q), and for every ᾱ-transition ⟨q, σ, s⟩ of A, we have that ⟨κ(q), σ, κ(s)⟩ is an ᾱ-transition of B. Thus, all nice minimal GFG-tNCWs for L have the same set of safe components, and they differ only in the α-transitions among these safe components. An interesting research direction is a study of these safe components, and in particular a characterization of L by a congruence-based relation on finite words that is induced by them.

References

[1] A. Badr, V. Geffert, and I. Shipman. Hyper-minimizing minimized deterministic finite state automata. ITA, 43(1):69-94, 2009.
[2] In Proc. 38th Conf. on Foundations of Software Technology and Theoretical Computer Science, volume 122 of LIPIcs, pages 16:1-16:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.
[3] U. Boker, D. Kuperberg, O. Kupferman, and M. Skrzypczak. Nondeterminism in the presence of a diverse or unknown future. In Proc. 40th Int. Colloq. on Automata, Languages, and Programming, volume 7966 of Lecture Notes in Computer Science, pages 89-100, 2013.
[4] U. Boker, O. Kupferman, and M. Skrzypczak. How deterministic are good-for-games automata? In Proc. 37th Conf. on Foundations of Software Technology and Theoretical Computer Science, volume 93 of LIPIcs, pages 18:1-18:14, 2017.
[5] J.R. Büchi. On a decision method in restricted second order arithmetic. In Proc. Int. Congress on Logic, Method, and Philosophy of Science 1960, pages 1-12. Stanford University Press, 1962.
[6] Th. Colcombet. The theory of stabilisation monoids and regular cost functions. In Proc. 36th Int. Colloq. on Automata, Languages, and Programming, volume 5556 of Lecture Notes in Computer Science, pages 139-150. Springer, 2009.
[7] A. Duret-Lutz, A. Lewkowicz, A. Fauchille, Th. Michaud, E. Renault, and L. Xu. Spot 2.0 - a framework for LTL and ω-automata manipulation. In 14th Int. Symp. on Automated Technology for Verification and Analysis, volume 9938 of Lecture Notes in Computer Science, pages 122-129. Springer, 2016.
[8] J. Esparza, O. Kupferman, and M.Y. Vardi. Verification. In Handbook AutoMathA, pages 549-588. European Mathematical Society, 2018.
[9] D. Giannakopoulou and F. Lerda. From states to transitions: Improving translation of LTL formulae to Büchi automata. In Proc. 22nd International Conference on Formal Techniques for Networked and Distributed Systems, volume 2529 of Lecture Notes in Computer Science, pages 308-326. Springer, 2002.
[10] S. Gurumurthy, R. Bloem, and F. Somenzi. Fair simulation minimization. In Proc. 14th Int. Conf. on Computer Aided Verification, volume 2404 of Lecture Notes in Computer Science, pages 610-623. Springer, 2002.
[11] S. Halamish and O. Kupferman. Minimizing deterministic lattice automata. ACM Transactions on Computational Logic, 16(1):1-21, 2015.
[12] T.A. Henzinger, O. Kupferman, and S. Rajamani. Fair simulation. Information and Computation, 173(1):64-81, 2002.
[13] T.A. Henzinger and N. Piterman. Solving games without determinization. In Proc. 15th Annual Conf. of the European Association for Computer Science Logic, volume 4207 of Lecture Notes in Computer Science, pages 394-410. Springer, 2006.
[14] J.E. Hopcroft. An n log n algorithm for minimizing the states in a finite automaton. In Z. Kohavi, editor, The Theory of Machines and Computations, pages 189-196. Academic Press, 1971.
[15] Comput. Sci., 24(6):815-830, 2013.
[16] T. Jiang and B. Ravikumar. Minimal NFA problems are hard. SIAM Journal on Computing, 22(6):1117-1141, 1993.
[17] D. Kuperberg and M. Skrzypczak. On determinisation of good-for-games automata. In Proc. 42nd Int. Colloq. on Automata, Languages, and Programming, pages 299-310, 2015.
[18] O. Kupferman and Y. Lustig. Lattice automata. In Proc. 8th Int. Conf. on Verification, Model Checking, and Abstract Interpretation, volume 4349 of Lecture Notes in Computer Science, pages 199-213. Springer, 2007.
[19] O. Kupferman, S. Safra, and M.Y. Vardi. Relating word and tree automata. Annals of Pure and Applied Logic, 138(1-3):126-146, 2006.
[20] O. Kupferman and M.Y. Vardi. Safraless decision procedures. In Proc. 46th IEEE Symp. on Foundations of Computer Science, pages 531-540, 2005.
[21] L.H. Landweber. Decision problems for ω-automata. Mathematical Systems Theory, 3:376-384, 1969.
[22] W. Li, Sh. Kan, and Z. Huang. A better translation from LTL to transition-based generalized Büchi automata. IEEE Access, 5:27081-27090, 2017.
[23] C. Löding. Efficient minimization of deterministic weak omega-automata. Information Processing Letters, 79(3):105-109, 2001.
[24] A. Malcher. Minimizing finite automata is computationally hard. Theoretical Computer Science, 327(3):375-390, 2004.
[25] M. Mohri. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269-311, 1997.
[26] G. Morgenstern. Expressiveness results at the bottom of the ω-regular hierarchy. M.Sc. Thesis, The Hebrew University, 2003.
[27] In Proc. 3rd IEEE Symp. on Logic in Computer Science, pages 422-427, 1988.
[28] J. Myhill. Finite automata and the representation of events. Technical Report WADD TR-57-624, pages 112-137, Wright Patterson AFB, Ohio, 1957.
[29] A. Nerode. Linear automaton transformations. Proceedings of the American Mathematical Society, 9(4):541-544, 1958.
[30] D. Niwinski and I. Walukiewicz. Relating hierarchies of word and tree automata. In Proc. 15th Symp. on Theoretical Aspects of Computer Science, volume 1373 of Lecture Notes in Computer Science. Springer, 1998.
[31] M.O. Rabin and D. Scott. Finite automata and their decision problems. IBM Journal of Research and Development, 3:115-125, 1959.
[32] S. Safra. On the complexity of ω-automata. In Proc. 29th IEEE Symp. on Foundations of Computer Science, pages 319-327, 1988.
[33] S. Schewe. Beyond hyper-minimisation - minimising DBAs and DPAs is NP-complete. In Proc. 30th Conf. on Foundations of Software Technology and Theoretical Computer Science, volume 8 of LIPIcs, pages 400-411, 2010.
[34] S. Sickert, J. Esparza, S. Jaax, and J. Křetínský. Limit-deterministic Büchi automata for linear temporal logic. In Proc. 28th Int. Conf. on Computer Aided Verification, volume 9780 of Lecture Notes in Computer Science, pages 312-332. Springer, 2016.
[35] R.E. Tarjan. Depth first search and linear graph algorithms. SIAM Journal of Computing, 1(2):146-160, 1972.
[36] M.Y. Vardi and P. Wolper. Reasoning about infinite computations. Information and Computation, 115(1):1-37, 1994.


Bader Abu Radi, Orna Kupferman. Minimizing GFG Transition-Based Automata. LIPIcs - Leibniz International Proceedings in Informatics, 2019, 100:1-100:16, DOI: 10.4230/LIPIcs.ICALP.2019.100