# The Role of Visibility in Pursuit/Evasion Games


*Keywords:* mobile robotics; robot coordination; pursuit/evasion


Department of Electrical and Computer Engineering, Aristotle University, GR 54248, Thessaloniki, Greece

Laboratoire J. A. Dieudonné, UMR CNRS-UNS No 7351, Université de Nice Sophia-Antipolis, Parc Valrose 06108 Nice Cedex 2, France

Department of Mathematics, Ryerson University, 350 Victoria St., Toronto, ON, M5B 2K3, Canada

Author to whom correspondence should be addressed.

Received: 3 September 2014 / Revised: 6 November 2014 / Accepted: 26 November 2014 / Published: 8 December 2014

(This article belongs to the Special Issue Coordination of Robotic Systems)

The cops-and-robber (CR) game has been used in mobile robotics as a discretized model (played on a graph G) of pursuit/evasion problems. The “classic” CR version is a perfect information game: the cops’ (pursuers’) location is always known to the robber (evader) and vice versa. Many variants of the classic game can be defined: the robber can be invisible, and he can be either adversarial (trying to avoid capture) or drunk (performing a random walk). Furthermore, the cops and robber can reside in either nodes or edges of G. Several of these variants are relevant as models of robotic pursuit/evasion. In this paper, we first define carefully several of the variants mentioned above and related quantities such as the cop number and the capture time. Then we introduce and study the cost of visibility (COV), a quantitative measure of the increase in difficulty (from the cops’ point of view) when the robber is invisible. In addition to our theoretical results, we present algorithms which can be used to compute capture times and COV of graphs which are analytically intractable. Finally, we present the results of applying these algorithms to the numerical computation of COV.

Pursuit/evasion (PE) and related problems (search, tracking, surveillance) have been the subject of extensive research in the last fifty years and much of this research is connected to mobile robotics [1]. When the environment is represented by a graph (for instance, a floorplan can be modeled as a graph, with nodes corresponding to rooms and edges corresponding to doors; similarly, a maze can be represented by a graph with edges corresponding to tunnels and nodes corresponding to intersections), the original PE problem is reduced to a graph game played between the pursuers and the evader.

In the current paper, inspired by Isler and Karnad’s recent work [2], we study the role of information in cops-and-robber (CR) games, an important version of graph-based PE. By “information” we mean specifically the players’ location. For example, we expect that when the cops know the robber’s location they can do better than when the robber is “invisible”. Our goal is to make precise the term “better”.

Reviews of the graph theoretic CR literature appear in [3,4,5]. In the “classical” CR variant [6] it is assumed that the cops always know the robber’s location and vice versa. The “invisible” variant, in which the cops cannot see the robber (but the robber always sees the cops) has received less attention in the graph theoretic literature; among the few papers which treat this case we mention [2,7,8,9] and also [10] in which both cops and robber are invisible.

Both the visible and invisible CR variants are natural models for discretized robotic PE problems; the connection has been noted and exploited relatively recently [2,8,11]. If it is further assumed that the robber is not actively trying to avoid capture (the case of drunk robber) we obtain a one-player graph game; this model has been used quite often in mobile robotics [12,13,14,15,16] and especially (when assuming random robber movement) in publications such as [17,18,19,20,21], which utilize partially observable Markov decision processes (POMDP, [22,23,24]). For a more general overview of pursuit/evasion and search problems in robotics, the reader is referred to [1]; some of the works cited in this paper provide a useful background to the current paper. Finally, several related works have also been published in the Distributed Algorithms community [25,26,27].

This paper is structured as follows. In Section 2 we present preliminary material, notation and the definition of the “classical” CR game; we also introduce several node and edge CR variants. In Section 3 we define rigorously the cop number and capture time for the classical CR game and the previously introduced CR variants. In Section 4 we study the cost of visibility (COV). In Section 5 we present algorithms which compute capture time and optimal strategies for several CR variants. In Section 6 we further study COV using computational experiments. Finally, in Section 7 we summarize and present our conclusions.

- We use the following notation for sets: $\mathbb{N}$ denotes $\left\{1,2,\dots \right\}$; ${\mathbb{N}}_{0}$ denotes $\left\{0,1,2,\dots \right\}$; $\left[K\right]$ denotes $\left\{1,\dots ,K\right\}$; $A-B=\left\{x:x\in A,x\notin B\right\}$; $\left|A\right|$ denotes the cardinality of A (i.e., the number of its elements).
- A graph $G=(V,E)$ consists of a node set V and an edge set E, where every $e\in E$ has the form $e=\left\{x,y\right\}\subseteq V$. In other words, we are concerned with finite, undirected, simple graphs; in addition we will always assume that G is connected and that G contains n nodes: $\left|V\right|=n$. Furthermore, we will assume, without loss of generality, that the node set is $V=\left\{1,2,\dots ,n\right\}$. We let ${V}^{K}=\underset{K\phantom{\rule{4.pt}{0ex}}\text{times}}{\underbrace{V\times V\times \dots \times V}}$. We also define ${V}_{D}^{2}\subseteq {V}^{2}$ by ${V}_{D}^{2}=\{(x,x):x\in V\}$ (it is the set of “diagonal” node pairs).
- A directed graph (digraph) $G=(V,E)$ consists of a node set V and an edge set E, where every $e\in E$ has the form $e=\left(x,y\right)\in V\times V$. In other words, the edges of a digraph are ordered pairs.
- In graphs, the (open) neighborhood of some $x\in V$ is $N\left(x\right)=\left\{y:\left\{x,y\right\}\in E\right\}$; in digraphs it is $N\left(x\right)=\left\{y:\left(x,y\right)\in E\right\}$. In both cases, the closed neighborhood of $x$ is $N\left[x\right]=N\left(x\right)\cup \left\{x\right\}$.
- Given a graph $G=\left(V,E\right)$, its line graph $L\left(G\right)=\left({V}^{\prime},{E}^{\prime}\right)$ is defined as follows: the node set is ${V}^{\prime}=E$, i.e., it has one node for every edge of G; the edge set is defined by having the nodes $\left\{u,v\right\},\left\{x,y\right\}\in {V}^{\prime}$ connected by an edge $\left\{\left\{u,v\right\},\left\{x,y\right\}\right\}$ if and only if $\left|\left\{u,v\right\}\cap \left\{x,y\right\}\right|=1$ (i.e., if the original edges of G are adjacent).
- We will write $f\left(n\right)=o\left(g\left(n\right)\right)$ if and only if ${lim}_{n\to \infty}\frac{f\left(n\right)}{g\left(n\right)}=0$. Note that in this asymptotic notation n denotes the parameter with respect to which asymptotics are considered. So in later sections we will write $o\left(n\right)$, $o\left(M\right)$ etc.
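The line-graph construction defined above is straightforward to implement. Below is a minimal sketch in Python (the function name `line_graph` is ours, not from the paper): the nodes of $L(G)$ are the edges of G, joined whenever they share exactly one endpoint.

```python
# Build the line graph L(G) of an undirected simple graph G: nodes of L(G)
# are the edges of G; two such nodes are adjacent iff the corresponding
# edges of G share exactly one endpoint.
from itertools import combinations

def line_graph(edges):
    """edges: iterable of 2-element tuples/sets {u, v}; returns (nodes, edges) of L(G)."""
    nodes = [frozenset(e) for e in edges]
    lg_edges = [
        frozenset({e1, e2})
        for e1, e2 in combinations(nodes, 2)
        if len(e1 & e2) == 1  # adjacent edges of G share exactly one endpoint
    ]
    return nodes, lg_edges

# Example: the path 1-2-3 has two edges sharing node 2, so L(G) is a single
# edge joining the L(G)-nodes {1,2} and {2,3}.
nodes, lg = line_graph([(1, 2), (2, 3)])
```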

The “classical” CR game can be described as follows. Player C controls K cops (with $K\ge 1$) and player R controls a single robber. Cops and robber are moved along the edges of a graph $G=\left(V,E\right)$ in discrete time steps $t\in {\mathbb{N}}_{0}$. At time t, the robber’s location is ${Y}_{t}\in V$ and the cops’ locations are ${X}_{t}=({X}_{t}^{1},{X}_{t}^{2},\dots ,{X}_{t}^{K})\in {V}^{K}$ (for $t\in {\mathbb{N}}_{0}$ and $k\in \left[K\right]$). The game is played in turns; in the 0-th turn first C places the cops on nodes of the graph and then R places the robber; in the t-th turn, for $t>0$, first C moves the cops to ${X}_{t}$ and then R moves the robber to ${Y}_{t}$. Two types of moves are allowed: (a) sliding along a single edge and (b) staying in place; in other words, for all t and k, either $\{{X}_{t-1}^{k},{X}_{t}^{k}\}\in E$ or ${X}_{t-1}^{k}={X}_{t}^{k}$; similarly, $\{{Y}_{t-1},{Y}_{t}\}\in E$ or ${Y}_{t-1}={Y}_{t}$. The cops win if they capture the robber, i.e., if there exist $t\in {\mathbb{N}}_{0}$ and $k\in \left[K\right]$ such that ${Y}_{t}={X}_{t}^{k}$; the robber wins if for all $t\in {\mathbb{N}}_{0}$ and $k\in \left[K\right]$ we have ${Y}_{t}\ne {X}_{t}^{k}$. In what follows we will describe these eventualities by the following “shorthand notation”: ${Y}_{t}\in {X}_{t}$ and ${Y}_{t}\notin {X}_{t}$ (i.e., in this notation we consider ${X}_{t}$ as a set of cop positions).

In the classical game both C and R are adversarial: C plays to effect capture and R plays to avoid it. But there also exist “drunk robber” versions, in which the robber simply performs a random walk on G such that, for all $u,v\in V$, we have

$$Pr\left({Y}_{0}=u\right)=\frac{1}{n}\phantom{\rule{2.em}{0ex}}\text{and}\phantom{\rule{2.em}{0ex}}Pr\left({Y}_{t+1}=u|{Y}_{t}=v\right)=\left\{\begin{array}{cc}\frac{1}{\left|N\left(v\right)\right|}\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}u\in N\left(v\right)\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}\right.$$

In this case we can say that no R player is present (or, following a common formulation, we can say that the R player is “Nature”).
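Equation (1) says the drunk robber's walk is governed by a column-stochastic transition matrix, following the convention ${P}_{u,v}=Pr\left({Y}_{t+1}=u|{Y}_{t}=v\right)$ used later in the paper. A small sketch, with our own helper name `walk_matrix`:

```python
# Transition matrix of the drunk robber's random walk, per Equation (1):
# P[u][v] = Pr(Y_{t+1} = u | Y_t = v) = 1/|N(v)| if u is a neighbour of v,
# and 0 otherwise. Each COLUMN sums to 1 (column-stochastic convention).
def walk_matrix(n, edges):
    nbr = {v: set() for v in range(n)}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)
    return [[1.0 / len(nbr[v]) if u in nbr[v] else 0.0 for v in range(n)]
            for u in range(n)]

# Path 0-1-2: from the middle node the robber moves to either end w.p. 1/2.
P = walk_matrix(3, [(0, 1), (1, 2)])
```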

If an R player exists, the cops’ locations are always known to him; on the other hand, the robber can be either visible (his location is known to C) or invisible (his location is unknown). Hence we have four different CR variants, as detailed in Table 1.

| Variant | Abbreviation |
| --- | --- |
| Adversarial Visible Robber | av-CR |
| Adversarial Invisible Robber | ai-CR |
| Drunk Visible Robber | dv-CR |
| Drunk Invisible Robber | di-CR |

In all of the above CR variants both cops and robber move from node to node. This is a good model for entities (e.g., robots) which move from room to room in an indoor environment. There also exist cases (for example moving in a maze or a road network) where it makes more sense to assume that both cops and robber move from edge to edge. We will call the classical version of the edge CR game edge av-CR; it has attracted attention only recently [28]. Edge ai-CR, dv-CR and di-CR variants are also possible, in analogy to the node versions listed in Table 1. Each of these cases can be reduced to the corresponding node variant, with the edge game taking place on the line graph $L\left(G\right)$ of G.

Two graph parameters which can be obtained from the av-CR game are the cop number and the capture time. In this section we will define these quantities in game theoretic terms (while this approach is not common in the CR literature, we believe it offers certain advantages in clarity of presentation) and also consider their extensions to other CR variants. Before examining each of these CR variants in detail, let us mention a particular modification which we will apply to all of them. Namely, we assume that (every variant of) the CR game is played for an infinite number of rounds. This is obviously the case if the robber is never captured; but we also assume that, in case the robber is captured at some time ${t}^{*}$, the game continues for $t\in \{{t}^{*}+1,{t}^{*}+2,\dots \}$ with the following restriction: for all $t\ge {t}^{*}$, we have ${Y}_{t}={X}_{t}^{{k}^{*}}$ (where ${k}^{*}$ is the index of the cop who effected the capture). This modification facilitates the game theoretic analysis presented in the sequel; intuitively, it implies that after capture, the ${k}^{*}$-th cop forces the robber to “follow” him.

We will define cop number and capture time in game theoretic terms. To this end we must first define histories and strategies.

A particular instance of the CR game can be fully described by the sequence of cops and robber locations; these locations are fully determined by the C and R moves. So, if we let ${x}_{t}\in {V}^{K}$ (resp. ${y}_{t}\in V$) denote the nodes into which C (resp. R) places the cops (resp. the robber) at time t, then a history is a sequence ${x}_{0}{y}_{0}{x}_{1}{y}_{1}\dots $ . Such a sequence can have finite or infinite length; we denote the set of all finite length histories by ${H}_{*}^{\left(K\right)}$; note that there exists an infinite number of finite length sequences. By convention ${H}_{*}^{\left(K\right)}$ also includes the zero-length or null history, which is the empty sequence (this corresponds to the beginning of the game, when neither player has made a move, just before C places the cops on G), denoted by λ. Finally, we denote the set of all infinite length histories by ${H}_{\infty}^{\left(K\right)}$.

Since both cops and robber are visible and the players move sequentially, av-CR is a game of perfect information; in such a game C loses nothing by limiting himself to pure (i.e., deterministic) strategies [29]. A pure cop strategy is a function ${s}_{C}:{H}_{*}^{\left(K\right)}\to {V}^{K}$; a pure robber strategy is a function ${s}_{R}:{H}_{*}^{\left(K\right)}\to V$. In both cases the idea is that, given a finite length history, the strategy produces the next cop or robber move (note the dependence on K, the number of cops); for example, when the robber strategy ${s}_{R}$ receives the input ${x}_{0}$, it will produce the output ${y}_{0}={s}_{R}\left({x}_{0}\right)$; when it receives ${x}_{0}{y}_{0}{x}_{1}$, it will produce ${y}_{1}={s}_{R}\left({x}_{0}{y}_{0}{x}_{1}\right)$ and so on. We will denote the set of all legal cop strategies by ${\mathbf{S}}_{C}^{\left(K\right)}$ and the set of all legal robber strategies by ${\mathbf{S}}_{R}^{\left(K\right)}$; a strategy is “legal” if it only provides moves which respect the CR game rules. The set ${\tilde{\mathbf{S}}}_{C}^{\left(K\right)}\subseteq {\mathbf{S}}_{C}^{\left(K\right)}$ (resp. ${\tilde{\mathbf{S}}}_{R}^{\left(K\right)}\subseteq {\mathbf{S}}_{R}^{\left(K\right)}$) is the set of memoryless legal cop (resp. robber) strategies, i.e., strategies which depend only on the current cop and robber positions; we will denote memoryless strategies by Greek letters, e.g., ${\sigma}_{C}$, ${\sigma}_{R}$ etc. In other words

$$\begin{array}{cc}\hfill {\sigma}_{C}& \in {\tilde{\mathbf{S}}}_{C}^{\left(K\right)}\Rightarrow \left[\forall t:{x}_{t+1}={\sigma}_{C}\left({x}_{0}{y}_{0}\dots {x}_{t}{y}_{t}\right)={\sigma}_{C}\left({x}_{t}{y}_{t}\right)\right]\hfill \\ \hfill {\sigma}_{R}& \in {\tilde{\mathbf{S}}}_{R}^{\left(K\right)}\Rightarrow \left[\forall t:{y}_{t+1}={\sigma}_{R}\left({x}_{0}{y}_{0}\dots {x}_{t}{y}_{t}{x}_{t+1}\right)={\sigma}_{R}\left({y}_{t}{x}_{t+1}\right)\right]\hfill \end{array}$$

It seems intuitively obvious that both C and R lose nothing by playing with memoryless strategies (i.e., computing their next moves based on the current position of the game, not on its entire history). This is true but requires a proof. One approach to this proof is furnished in [30,31]. But we will present another proof by recognizing that the CR game belongs to the extensively researched family of reachability games [32,33].

A reachability game is played by two players (Player 0 and Player 1) on a digraph $\overline{G}=\left(\overline{V},\overline{E}\right)$; each node $v\in \overline{V}$ is a position and each edge is a move; i.e., the game moves from node to node (position) along the edges of the digraph. The game is described by the tuple $\left({\overline{V}}_{0},{\overline{V}}_{1},\overline{E},\overline{F}\right)$, where ${\overline{V}}_{0}\cup {\overline{V}}_{1}=\overline{V}$, ${\overline{V}}_{0}\cap {\overline{V}}_{1}=\varnothing $ and $\overline{F}\subseteq \overline{V}$. For $i\in \left\{0,1\right\}$, ${\overline{V}}_{i}$ is the set of positions (nodes) from which the i-th Player makes the next move; the game terminates with a win for Player 0 if and only if a move takes place into a node $v\in \overline{F}$ (the target set of Player 0); if this never happens, Player 1 wins. Here is a more intuitive description of the game: each move consists in sliding a token from one digraph node to another, along an edge; the i-th player slides the token if and only if it is currently located on a node $v\in {\overline{V}}_{i}$ ($i\in \left\{0,1\right\}$); Player 0 wins if and only if the token goes into a node $u\in \overline{F}$; otherwise Player 1 wins. The following is well known [32,33].
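The winning sets guaranteed by this well-known result can be computed by the standard backward fixed-point (“attractor”) construction: a position is winning for Player 0 if it is a target, if Player 0 moves from it and some successor is winning, or if Player 1 moves from it and all successors are winning. A minimal Python sketch (function name ours; we assume every position has at least one outgoing move, as holds in the CR game digraph):

```python
# Player 0's winning set in a reachability game (V0, V1, E, F), computed as
# the least fixed point ("attractor") of the target set F.
def attractor(v0, v1, edges, target):
    succ = {v: [] for v in v0 | v1}
    for a, b in edges:
        succ[a].append(b)
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v, s in succ.items():
            if v in win:
                continue
            # Player 0 needs SOME winning successor; Player 1 is trapped only
            # if ALL successors are winning (dead ends are assumed absent).
            if (v in v0 and any(w in win for w in s)) or \
               (v in v1 and s and all(w in win for w in s)):
                win.add(v)
                changed = True
    return win

# Tiny example: Player 0 owns 'a' and 't', Player 1 owns 'b'; from 'a'
# Player 0 can step directly into the target 't', so 'a' (and hence 'b',
# which must move to 'a') are winning for Player 0.
win = attractor({'a', 't'}, {'b'},
                [('a', 'b'), ('a', 't'), ('b', 'a'), ('t', 't')], {'t'})
```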

We can convert the av-CR game with K cops to an equivalent reachability game which is played on the CR game digraph. In this digraph every node corresponds to a position of the original CR game; a (directed) edge from node u to node v indicates that it is possible to get from position u to position v in a single move. The CR game digraph has three types of nodes.

- Nodes of the form $u=\left(x,y,p\right)$ correspond to positions (in the original CR game) with the cops located at $x\in {V}^{K}$, the robber at $y\in V$ and player $p\in \left\{C,R\right\}$ being next to move.
- There is a single node $u=\left(\lambda ,\lambda ,C\right)$ which corresponds to the starting position of the game: neither the cops nor the robber have been placed on G; it is C’s turn to move (recall that λ denotes the empty sequence).
- Finally, there exist ${n}^{K}$ nodes of the form $u=\left(x,\lambda ,R\right)$: the cops have just been placed in the graph (at positions $x\in {V}^{K}$) but the robber has not been placed yet; it is R’s turn to move.

Accordingly, we set

$$\begin{array}{cc}\hfill {\overline{V}}_{0}^{\left(K\right)}& =\left\{\left(x,y,C\right):x\in {V}^{K}\cup \left\{\lambda \right\},y\in V\cup \left\{\lambda \right\}\right\}\hfill \\ \hfill {\overline{V}}_{1}^{\left(K\right)}& =\left\{\left(x,y,R\right):x\in {V}^{K}\cup \left\{\lambda \right\},y\in V\cup \left\{\lambda \right\}\right\}\hfill \\ \hfill {\overline{V}}^{\left(K\right)}& ={\overline{V}}_{0}^{\left(K\right)}\cup {\overline{V}}_{1}^{\left(K\right)}\hfill \end{array}$$

and let ${\overline{E}}^{\left(K\right)}$ consist of all pairs $\left(u,v\right)$ where $u,v\in {\overline{V}}^{\left(K\right)}$ and the move from u to v is legal. Finally, we recognize that C’s target set is

$${\overline{F}}^{\left(K\right)}=\left\{\left(x,y,p\right):x\in {V}^{K},y\in \left(V\cap x\right),p\in \left\{C,R\right\}\right\}$$

i.e., the set of all positions in which the robber is in the same node as at least one cop.

With the above definitions, we have mapped the classical CR game (played with K cops on the graph G) to the reachability game $\left({\overline{V}}_{0}^{\left(K\right)},{\overline{V}}_{1}^{\left(K\right)},{\overline{E}}^{\left(K\right)},{\overline{F}}^{\left(K\right)}\right)$. By Theorem 1, Player i (with $i\in \left\{0,1\right\}$) will have a winning set ${\overline{W}}_{i}^{\left(K\right)}\subseteq {\overline{V}}^{\left(K\right)}$, i.e., a set with the following property: whenever the reachability game starts at some $u\in {\overline{W}}_{i}^{\left(K\right)}$, then Player i has a winning strategy (it may be the case, for specific G and K, that either of ${\overline{W}}_{0}^{\left(K\right)}$, ${\overline{W}}_{1}^{\left(K\right)}$ is empty). Recall that in our formulation of CR as a reachability game, Player 0 is C. In reachability terms, the statement “C has a winning strategy in the classical CR game” translates to “$\left(\lambda ,\lambda ,C\right)\in {\overline{W}}_{0}^{\left(K\right)}$” and, for a given graph G, the validity of this statement will in general depend on K. It is clear that ${\overline{W}}_{0}^{\left(K\right)}$ is increasing with K:

$${K}_{1}\le {K}_{2}\Rightarrow {\overline{W}}_{0}^{\left({K}_{1}\right)}\subseteq {\overline{W}}_{0}^{\left({K}_{2}\right)}$$

It is also clear that

$$\left(\lambda ,\lambda ,C\right)\in {\overline{W}}_{0}^{\left(\left|V\right|\right)}\phantom{\rule{4.pt}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{every}\phantom{\rule{4.pt}{0ex}}G=\left(V,E\right)$$

because, if C has $\left|V\right|$ cops, he can place one in every $u\in V$ and win immediately. In fact, for $K=\left|V\right|$, we have ${\overline{W}}_{0}^{\left(\left|V\right|\right)}={\overline{V}}^{\left(\left|V\right|\right)}$, because from every position $\left(x,y,p\right)$, C can move the cops so that one cop resides in each $u\in V$, which guarantees immediate capture.

Based on Equations (2) and (3) we can define the cop number of G to be the minimum number of cops that guarantee capture; more precisely we have the following definition (which is equivalent to the “classical” definition of cop number [34]).

$$c\left(G\right)=min\left\{K:\left(\lambda ,\lambda ,C\right)\in {\overline{W}}_{0}^{\left(K\right)}\right\}$$
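On small graphs, this definition can be evaluated directly: build the CR game digraph for K = 1, 2, …, compute Player 0's winning set by the attractor fixed point, and return the first K for which $\left(\lambda ,\lambda ,C\right)$ is winning. A brute-force sketch (function names ours; feasible only for small n and K, since the digraph has on the order of ${n}^{K+1}$ positions):

```python
# Cop number c(G) via the reachability-game reduction: the smallest K such
# that the initial position (lambda, lambda, C) lies in the cops' winning set.
from itertools import product

def cop_number(n, edges):
    nbr = {v: {v} for v in range(n)}          # closed neighbourhoods N[v]
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)
    for k in range(1, n + 1):                 # K = |V| always suffices
        if cops_win(n, nbr, k):
            return k

def cops_win(n, nbr, k):
    # Positions are (x, y, turn); 'L' stands for the empty placement lambda.
    placements = list(product(range(n), repeat=k))
    succ = {('L', 'L', 'C'): [(x, 'L', 'R') for x in placements]}
    for x in placements:
        succ[(x, 'L', 'R')] = [(x, y, 'C') for y in range(n)]
        for y in range(n):
            succ[(x, y, 'C')] = [(x2, y, 'R')
                                 for x2 in product(*(nbr[c] for c in x))]
            succ[(x, y, 'R')] = [(x, y2, 'C') for y2 in nbr[y]]
    # Target set F: the robber shares a node with at least one cop.
    win = {v for v in succ if v[1] != 'L' and v[1] in v[0]}
    changed = True
    while changed:                            # attractor fixed point
        changed = False
        for v, s in succ.items():
            if v in win:
                continue
            if (v[2] == 'C' and any(w in win for w in s)) or \
               (v[2] == 'R' and all(w in win for w in s)):
                win.add(v)
                changed = True
    return ('L', 'L', 'C') in win
```

For instance, the path on three nodes has cop number 1 (place the cop at the centre), while the 4-cycle needs two cops, since a lone robber can always keep distance two from a single cop.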

While a cop winning strategy ${s}_{C}$ guarantees that the token will go into (and remain in) ${\overline{F}}^{\left(K\right)}$, we still do not know how long it will take for this to happen. However, it is easy to prove that, if $K\ge c\left(G\right)$ and C uses a memoryless winning strategy, then no game position will be repeated until capture takes place. Hence the following holds.

Let us now turn from winning to time optimal strategies. To define these, we first define the capture time, which will serve as the CR payoff function.

$${T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)=min\left\{t:\exists k\in \left[K\right]\phantom{\rule{4.pt}{0ex}}\mathit{such}\phantom{\rule{4.pt}{0ex}}\mathit{that}\phantom{\rule{4.pt}{0ex}}{Y}_{t}={X}_{t}^{k}\right\}$$

We will assume that R’s payoff is ${T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$ and C’s payoff is $-{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$ (hence av-CR is a two-person zero-sum game). Note that capture time (i) obviously depends on K and (ii) for a fixed K is fully determined by the ${s}_{C}$ and ${s}_{R}$ strategies. Now, following standard game theoretic practice, we define optimal strategies.

$$\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)=\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$$

The value of the av-CR game played with K cops is the common value of the two sides of Equation (5) and we denote it ${T}^{\left(K\right)}\left({s}_{C}^{\left(K\right)},{s}_{R}^{\left(K\right)}|G\right)$.

We emphasize that the validity of Equation (5) is not known a priori. C (resp. R) can guarantee that he loses no more than ${inf}_{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{sup}_{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$ (resp. gains no less than ${sup}_{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{inf}_{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$). We always have

$$\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\le \underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$$

But, since av-CR is an infinite game (i.e., depending on ${s}_{C}$ and ${s}_{R}$, it can last an infinite number of turns) it is not clear that equality holds in Equation (6) and, even when it does, the existence of optimal strategies $\left({s}_{C}^{\left(K\right)},{s}_{R}^{\left(K\right)}\right)$ which achieve the value is not guaranteed.

In fact it can be proved that, for $K\ge c\left(G\right)$, av-CR has both a value and optimal strategies. The details of this proof will be reported elsewhere, but the gist of the argument is the following. Since av-CR is played with $K\ge c\left(G\right)$ cops, by Theorem 2, C has a memoryless strategy which guarantees the game will last no more than $\overline{T}\left(K;G\right)$ turns. Hence av-CR with $K\ge c\left(G\right)$ is essentially a finite zero-sum two-player game; it is well known [35] that every such game has a value and optimal memoryless strategies. In short, we have the following.

$${T}^{\left(K\right)}\left({\sigma}_{C}^{\left(K\right)},{\sigma}_{R}^{\left(K\right)}|G\right)=\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)=\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$$

Hence we can define the capture time of a graph to be the value of av-CR when played on G with $K=c\left(G\right)$ cops.

$$ct\left(G\right)=\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)=\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\mathbf{S}}_{R}^{\left(K\right)}}{sup}{T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$$
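For small graphs, $ct\left(G\right)$ can likewise be computed by backward induction on the CR game digraph: the attractor level of a position equals the optimal number of plies until capture, with C minimizing and R maximizing. A sketch (function name and the ply-to-round conversion are ours: two placement plies precede the chase, and each subsequent round consists of one cop ply and one robber ply):

```python
# Value of av-CR with k cops, measured in cop-move rounds, via attractor
# levels on the CR game digraph; returns None if k cops cannot force capture.
from itertools import product
from math import ceil

def capture_time(n, edges, k):
    nbr = {v: {v} for v in range(n)}              # closed neighbourhoods N[v]
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)
    placements = list(product(range(n), repeat=k))
    succ = {('L', 'L', 'C'): [(x, 'L', 'R') for x in placements]}
    for x in placements:
        succ[(x, 'L', 'R')] = [(x, y, 'C') for y in range(n)]
        for y in range(n):
            succ[(x, y, 'C')] = [(x2, y, 'R')
                                 for x2 in product(*(nbr[c] for c in x))]
            succ[(x, y, 'R')] = [(x, y2, 'C') for y2 in nbr[y]]
    level = {v: 0 for v in succ if v[1] != 'L' and v[1] in v[0]}  # target F
    i, changed = 0, True
    while changed:                                # level-synchronous attractor
        i, changed = i + 1, False
        new = {}
        for v, s in succ.items():
            if v in level:
                continue
            if (v[2] == 'C' and any(w in level for w in s)) or \
               (v[2] == 'R' and all(w in level for w in s)):
                new[v] = i                        # optimal plies to reach F
                changed = True
        level.update(new)
    if ('L', 'L', 'C') not in level:
        return None
    plies = level[('L', 'L', 'C')]                # 2 placement plies, then
    return ceil((plies - 2) / 2)                  # 2 plies per chase round
```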

In this game the robber is visible and performs a random walk on G (drunk robber) as indicated by Equation (1). In the absence of cops, ${Y}_{t}$ is a Markov chain on V, with transition probability matrix P, where for every $u,v\in \left\{1,2,\dots ,n\right\}$ we have

$${P}_{u,v}=Pr\left({Y}_{t+1}=u|{Y}_{t}=v\right)$$

In the presence of one or more cops, ${\left\{{Y}_{t}\right\}}_{t=0}^{\infty}$ is a Markov decision process (MDP) [36] with state space $V\cup \left\{n+1\right\}$ (where $n+1$ is the capture state) and transition probability matrix $P\left({X}_{t}\right)$ (obtained from P as shown in [37]); in other words, ${X}_{t}$ is the control variable, selected by C.

Since no robber strategy is involved, the capture time on G depends only on the K-cop strategy ${s}_{C}$; namely

$${T}^{\left(K\right)}\left({s}_{C}|G\right)=min\left\{t:\exists k\in \left[K\right]\phantom{\rule{4.pt}{0ex}}\text{such}\phantom{\rule{4.pt}{0ex}}\text{that}\phantom{\rule{4.pt}{0ex}}{Y}_{t}={X}_{t}^{k}\right\}$$

which can also be written as

$${T}^{\left(K\right)}\left({s}_{C}|G\right)=\sum _{t=0}^{\infty}\mathbf{1}\left({Y}_{t}\notin {X}_{t}\right)$$

where $\mathbf{1}\left({Y}_{t}\notin {X}_{t}\right)$ equals 1 if ${Y}_{t}$ does not belong to ${X}_{t}$ (taken as a set of cop positions) and 0 otherwise. Since the robber performs a random walk on G, it follows that ${T}^{\left(K\right)}\left({s}_{C}|G\right)$ is a random variable, and C wants to minimize its expected value:

$$E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)=E\left(\sum _{t=0}^{\infty}\mathbf{1}\left({Y}_{t}\notin {X}_{t}\right)\right)$$

The minimization of Equation (9) is a typical undiscounted, infinite horizon MDP problem. Using standard MDP results [36] we see that (i) C loses nothing by determining ${X}_{0},{X}_{1},\dots $ through a memoryless strategy ${\sigma}_{C}\left(x,y\right)$ and (ii) for every $K\ge 1$, $E\left({T}^{\left(K\right)}\left({\sigma}_{C}|G\right)\right)$ is well defined. Furthermore, for every $K\in \mathbb{N}$ there exists an optimal strategy ${\sigma}_{C}^{\left(K\right)}$ which minimizes $E\left({T}^{\left(K\right)}\left({\sigma}_{C}|G\right)\right)$; hence we have the following.

$$E\left({T}^{\left(K\right)}\left({\sigma}_{C}^{\left(K\right)}|G\right)\right)=\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$$

$$dct\left(G\right)=\underset{{s}_{C}\in {\mathbf{S}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$$

Note that, even though a single cop suffices to capture the drunk robber on any G, we have chosen to define $dct\left(G\right)$ to be the capture time for $K=c\left(G\right)$ cops; we have done this to make (in Section 4) an equitable comparison between $ct\left(G\right)$ and $dct\left(G\right)$.
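For a single cop, $E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$ can be approximated by standard MDP value iteration. The sketch below is ours and uses a simplified convention: it counts whole rounds (the cop moves, then the robber steps to a uniformly random neighbour), the robber starts at a uniformly random node, and the cop chooses the best starting node; so the returned number is a dct-style quantity under this convention rather than the paper's exact payoff.

```python
# Expected capture time, one cop vs. a drunk robber, by value iteration.
# V[x][y] = expected rounds to capture with the cop at x (moving first)
# and the robber at y; V[x][x] = 0 (already captured).
def drunk_capture_time(n, edges, iters=500):
    nbr = {v: set() for v in range(n)}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)
    V = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        W = [[0.0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                if x == y:
                    continue                      # captured: remaining cost 0
                best = float('inf')
                for x2 in nbr[x] | {x}:           # cop slides or stays
                    if x2 == y:
                        cost = 1.0                # capture on the cop's move
                    else:                         # robber steps uniformly at random
                        cost = 1.0 + sum(V[x2][y2] for y2 in nbr[y]) / len(nbr[y])
                    best = min(best, cost)
                W[x][y] = best
        V = W
    # Cop picks the best start; robber's start is uniform over all nodes.
    return min(sum(row) / n for row in V)
```

On the single edge 0-1, the cop captures in one move unless the robber starts on the cop's node, giving an expected value of 1/2 under this convention.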

This is not a perfect information game, since C cannot see R’s moves. Hence C and R must use mixed strategies ${s}_{C}$, ${s}_{R}$. A mixed strategy ${s}_{C}$ (resp. ${s}_{R}$) specifies, for every t, a conditional probability $Pr\left({X}_{t}|{X}_{0},{Y}_{0},\dots ,{Y}_{t-2},{X}_{t-1},{Y}_{t-1}\right)$ (resp. $Pr\left({Y}_{t}|{X}_{0},{Y}_{0},\dots ,{Y}_{t-1},{X}_{t}\right)$) according to which C (resp. R) selects his t-th move. Let ${\overline{\mathbf{S}}}_{C}^{\left(K\right)}$ (resp. ${\overline{\mathbf{S}}}_{R}^{\left(K\right)}$) be the set of all mixed cop (resp. robber) strategies. A strategy pair $\left({s}_{C},{s}_{R}\right)\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}\times {\overline{\mathbf{S}}}_{R}^{\left(K\right)}$ specifies probabilities for all events $\left({X}_{0}={x}_{0},\dots ,{X}_{t}={x}_{t},{Y}_{0}={y}_{0},\dots ,{Y}_{t}={y}_{t}\right)$ and these induce a probability measure which in turn determines R’s expected gain (and C’s expected loss), namely $E\left({T}^{\left(K\right)}\left({s}_{C}^{\left(K\right)},{s}_{R}^{\left(K\right)}|G\right)\right)$. Let us define

$$\begin{array}{cc}\hfill {\underline{v}}^{\left(K\right)}& =\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)\hfill \\ \hfill {\overline{v}}^{\left(K\right)}& =\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}E\left({T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)\hfill \end{array}$$

Similarly to av-CR, C (resp. R) can guarantee an expected payoff no greater than ${\overline{v}}^{\left(K\right)}$ (resp. no less than ${\underline{v}}^{\left(K\right)}$). If ${\underline{v}}^{\left(K\right)}={\overline{v}}^{\left(K\right)}$, we denote the common value by ${v}^{\left(K\right)}$ and call it the value of the ai-CR game (played on G, with K cops). A pair of strategies $\left({s}_{C}^{\left(K\right)},{s}_{R}^{\left(K\right)}\right)$ is called optimal if and only if $E\left({T}^{\left(K\right)}\left({s}_{C}^{\left(K\right)},{s}_{R}^{\left(K\right)}|G\right)\right)={v}^{\left(K\right)}$.

In [9] we have studied the ai-CR game and proved that it does indeed have a value and optimal strategies. We give a summary of the relevant argument; proofs can be found in [9].

First, invisibility does not increase the cop number. In other words, there is a cop strategy (involving $c\left(G\right)$ cops) which guarantees bounded expected capture time for every robber strategy ${s}_{R}$. More precisely, we have proved the following.

$$\forall K\ge c\left(G\right):\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}E\left({T}^{\left(K\right)}\left({\overline{s}}_{C}^{\left(K\right)},{s}_{R}|G\right)\right)<\infty $$

Now consider the “m-truncated” ai-CR game which is played exactly as the “regular” ai-CR but lasts at most m turns. Strategies ${s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}$ and ${s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}$ can be used in the m-truncated game: C and R use them only until the m-th turn. Let R receive one payoff unit for every turn in which the robber is not captured; denote the payoff of the m-truncated game (when strategies ${s}_{C}$, ${s}_{R}$ are used) by ${T}_{m}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$. Clearly

$$\forall m\in \mathbb{N},{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)},{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}:{T}_{m}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\le {T}_{m+1}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\le {T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)$$

The expected payoff of the m-truncated game is $E\left({T}_{m}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)$. Because it is a finite, two-person, zero-sum game, the m-truncated game has a value and optimal strategies. Namely, the value is

$${v}^{\left(K,m\right)}=\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}_{m}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)=\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}E\left({T}_{m}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)$$

and there exist optimal strategies ${s}_{C}^{\left(K,m\right)}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}$, ${s}_{R}^{\left(K,m\right)}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}$ such that

$$E\left({T}_{m}^{\left(K\right)}\left({s}_{C}^{\left(K,m\right)},{s}_{R}^{\left(K,m\right)}|G\right)\right)={v}^{\left(K,m\right)}<\infty $$

In [9] we use the truncated games to prove that the “regular” ai-CR game has a value, an optimal C strategy and ε-optimal R strategies. More precisely, we prove the following.

$$\underset{m\to \infty}{lim}{v}^{\left(K,m\right)}={\underline{v}}^{\left(K\right)}={\overline{v}}^{\left(K\right)}={v}^{\left(K\right)}$$

Furthermore, there exists a strategy ${s}_{C}^{\left(K\right)}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}$ such that

$$\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}E\left({T}^{\left(K\right)}\left({s}_{C}^{\left(K\right)},{s}_{R}|G\right)\right)={v}^{\left(K\right)}$$

and for every $\epsilon >0$ there exists an ${m}_{\epsilon}$ and a strategy ${s}_{R}^{\left(K,\epsilon \right)}$ such that

$$\forall m\ge {m}_{\epsilon}:{v}^{\left(K\right)}-\epsilon \le \underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C},{s}_{R}^{\left(K,\epsilon \right)}|G\right)\right)\le {v}^{\left(K\right)}$$

Having established the existence of ${v}^{\left(K\right)}$ we have the following.

$$c{t}_{i}\left(G\right)={v}^{\left(K\right)}=\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)=\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}\underset{{s}_{R}\in {\overline{\mathbf{S}}}_{R}^{\left(K\right)}}{sup}E\left({T}^{\left(K\right)}\left({s}_{C},{s}_{R}|G\right)\right)$$

In this game ${Y}_{t}$ is unobservable and drunk; call this the “regular” di-CR game and also introduce the m-truncated di-CR game. Both are one-player games or, equivalently, problems of controlling a partially observable MDP (POMDP) [36]. The target function is
$$E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)=E\left(\sum _{t=0}^{\infty}\mathbf{1}\left({Y}_{t}\notin {X}_{t}\right)\right)$$
which is exactly the same as Equation (9) but now ${Y}_{t}$ is unobservable. Equation (13) can be approximated by

$$E\left({T}_{m}^{\left(K\right)}\left({s}_{C}|G\right)\right)=E\left(\sum _{t=0}^{m}\mathbf{1}\left({Y}_{t}\notin {X}_{t}\right)\right)$$

The expected values in Equations (13) and (14) are well defined for every ${s}_{C}$. C must select a strategy ${s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}$ which minimizes $E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$. This is a typical infinite horizon, undiscounted POMDP problem [36] for which the following holds.

$$E\left({T}^{\left(K\right)}\left({s}_{C}^{\left(K\right)}|G\right)\right)=\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$$

Hence we can introduce the following.

$$dc{t}_{i}\left(G\right)=\underset{{s}_{C}\in {\overline{\mathbf{S}}}_{C}^{\left(K\right)}}{inf}E\left({T}^{\left(K\right)}\left({s}_{C}|G\right)\right)$$

As already mentioned, every edge CR variant can be reduced to the corresponding node variant played on $L\left(G\right)$, the line graph of G. Hence all the results and definitions of Section 3.1, Section 3.2, Section 3.3 and Section 3.4 hold for the edge variants as well. In particular, we have an edge cop number $\overline{c}\left(G\right)=c\left(L\left(G\right)\right)$ and capture times

$$\overline{ct}\left(G\right)=ct\left(L\left(G\right)\right),\phantom{\rule{1.em}{0ex}}\overline{dct}\left(G\right)=dct\left(L\left(G\right)\right),\phantom{\rule{1.em}{0ex}}{\overline{ct}}_{i}\left(G\right)=c{t}_{i}\left(L\left(G\right)\right),\phantom{\rule{1.em}{0ex}}{\overline{dct}}_{i}\left(G\right)=dc{t}_{i}\left(L\left(G\right)\right)$$

In general, all of these “edge CR parameters” will differ from the corresponding “node CR parameters”.
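This reduction is easy to exercise in code. The sketch below (Python; the edge-list representation is our own choice) builds $L\left(G\right)$ for the star ${S}_{N,1}$ and confirms that it is the complete graph ${K}_{N}$, a fact used in the edge-game theorems below.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph L(G): each edge of G becomes a node, and two
    nodes of L(G) are adjacent iff the original edges share an endpoint."""
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # the two edges of G share an endpoint
            adj[e].add(f)
            adj[f].add(e)
    return adj

# The star S_{N,1} has N edges, all meeting at the center node 0,
# so its line graph is the complete graph K_N.
N = 6
star_edges = [(0, leaf) for leaf in range(1, N + 1)]
L = line_graph(star_edges)
assert len(L) == N
assert all(len(neighbors) == N - 1 for neighbors in L.values())
```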

As already remarked, we expect that ai-CR is more difficult (from C’s point of view) than av-CR (the same holds for the drunk counterparts of this game). We quantify this statement by introducing the cost of visibility (COV).

Clearly, for every G we have ${H}_{a}\left(G\right)\ge 1$ and ${H}_{d}\left(G\right)\ge 1$ (i.e., it is at least as hard to capture an invisible robber as a visible one). The following theorem shows that in fact both ${H}_{a}\left(G\right)$ and ${H}_{d}\left(G\right)$ can become arbitrarily large. In proving the corresponding theorems, we will need the family of long star graphs ${S}_{N,M}$. For specific values of M and N, ${S}_{N,M}$ consists of N paths (we call these rays) each having M nodes, joined at a central node, as shown in Figure 1.
**Theorem 8.** For every $N\in \mathbb{N}$ we have ${H}_{a}\left({S}_{N,1}\right)=N$.

**(ii) Computing ${\mathit{ct}}_{i}\left({S}_{N,1}\right)$.** Let us now show that in ai-CR we have ${\mathit{ct}}_{i}\left({S}_{N,1}\right)=N$. C places the cop at ${X}_{0}=0$ and R places the robber at some ${Y}_{0}=u\ne 0$. We will obtain ${\mathit{ct}}_{i}\left({S}_{N,1}\right)$ by bounding it from above and below. For an upper bound, consider the following C strategy. Since C does not know the robber’s location, he must check the leaf nodes one by one. So at every odd $t$ he moves the cop into some $u\in \left\{1,2,\dots ,N\right\}$ and at every even $t$ he returns to 0. Note that R cannot change the robber’s original position: to do so, the robber would have to pass through 0, where he would be captured by the cop (who either is already at 0 or will be moved there just after the robber’s move). Hence C can choose the nodes he will check on odd turns with uniform probability and without repetitions. Equivalently, we can assume that the order in which nodes are chosen by C is selected uniformly at random from the set of all permutations; further, we assume that R (who does not know this order) starts at some ${Y}_{0}=u\in \left\{1,\dots ,N\right\}$. Then we have

$${\mathit{ct}}_{i}\left({S}_{N,1}\right)\le \frac{1}{N}\cdot 1+\frac{1}{N}\cdot 3+\dots +\frac{1}{N}\cdot \left(2N-1\right)=N$$
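As a quick sanity check of this computation, the following snippet verifies the identity $\frac{1}{N}{\sum}_{k=1}^{N}(2k-1)=N$ exactly, using rational arithmetic:

```python
from fractions import Fraction

# Expected capture time when the cop checks the N leaves in a uniformly
# random order: the robber's leaf is checked k-th with probability 1/N,
# at which point 2k-1 turns have elapsed.
for N in range(1, 50):
    expected = sum(Fraction(1, N) * (2 * k - 1) for k in range(1, N + 1))
    assert expected == N
```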

For a lower bound, consider the following R strategy. The robber is initially placed at a random leaf, different from the one selected by C (if the cop did not start at the center). Knowing this, the best C strategy is to check (in any order) all leaves without repetition. If the cop starts at the center, we get exactly the same sum as for the upper bound. Otherwise, we have

$${\mathit{ct}}_{i}\left({S}_{N,1}\right)\ge \frac{1}{N-1}\cdot 2+\frac{1}{N-1}\cdot 4+\dots +\frac{1}{N-1}\cdot \left(2N-2\right)=N$$

**(iii) Computing ${H}_{a}\left({S}_{N,1}\right)$.** Hence, for all $N\in \mathbb{N}$ we have ${H}_{a}\left({S}_{N,1}\right)=\frac{{\mathit{ct}}_{i}\left({S}_{N,1}\right)}{\mathit{ct}\left({S}_{N,1}\right)}=N$. ☐

**Theorem 9.** For every $N\in \mathbb{N}-\left\{1\right\}$ we have (as $M\to \infty $)

$${H}_{d}\left({S}_{N,M}\right)=(1+o\left(1\right))\frac{(2N-1)(N-1)+1}{N}\ge 2N-3$$

- With probability $(1-c+o(1))/N$, the robber starts on the same ray as the cop but farther away from the center. Conditioned on this event, the expected capture time is $M(1-c+o(1))/2$.
- With probability $(c+o(1))/N$, the robber starts on the same ray as the cop but closer to the center. Conditioned on this event, the expected capture time is $M(c+o(1))/2$.
- With probability $(N-1+o(1))/N$, the robber starts on a different ray than the cop. Conditioned on this event, the expected capture time is $(c+o(1))M+M(1/2+o(1))$.

Summing over the three cases, the expected capture time is

$$(1+o\left(1\right))M\left(\frac{1-c}{N}\cdot \frac{1-c}{2}+\frac{c}{N}\cdot \frac{c}{2}+\frac{N-1}{N}\cdot \frac{2c+1}{2}\right)$$

**(ii) Computing ${\mathit{dct}}_{i}\left({S}_{N,M}\right)$.** The initial placement of the robber is the same as in the visible variant, that is, the uniform distribution is used. However, since the robber is now invisible, C has to check all rays. As before, by Chernoff bounds, with probability at least $1-{e}^{-c{M}^{1/3}}$ (for some constant $c>0$), during $O\left(M\right)$ steps the robber always remains within distance $O\left({M}^{2/3}\right)$ of its initial position. If the robber starts at distance $\omega \left({M}^{2/3}\right)$ from the center, he will thus, with probability at least $1-{e}^{-c{M}^{1/3}}$, not change his ray during $O\left(M\right)$ steps. Otherwise, he might change from one ray to another with larger probability, but this happens only with the probability of the robber starting at distance $O\left({M}^{2/3}\right)$ from the center, and thus with probability at most $O\left({M}^{-1/3}\right)$. Keeping these remarks in mind, let us examine “reasonable” C strategies. It turns out that there are three such strategies.

**(ii.1)** Suppose first that C starts at the end of some ray and checks all rays one after the other.

- With probability $(1+o(1))/N$, the robber starts on the same ray as the cop. Conditioned on this event, the expected capture time is $(1+o(1))M/2$.
- With probability $(1+o(1))/N$, the robber starts on the $j$-th ray visited by the cop. Conditioned on this event, the expected capture time is $(1+o(1))(M+2M(j-2)+M/2)$. ($M$ steps are required to move from the end of the first ray to the center, $2M$ steps are “wasted” on each of the $j-2$ intermediate rays, and $M/2$ steps are needed, in expectation, to catch the robber on the $j$-th ray.)

Hence, conditioned on the robber not switching rays, the expected capture time in this case is

$$\begin{array}{c}(1+o\left(1\right))\frac{M}{N}\left(\frac{1}{2}+\left(1+\frac{1}{2}\right)+\left(3+\frac{1}{2}\right)+\dots +\left(1+2(N-2)+\frac{1}{2}\right)\right)\hfill \\ =(1+o\left(1\right))\frac{M}{N}\left(\frac{1}{2}+\left(2\cdot 1-\frac{1}{2}\right)+\left(2\cdot 2-\frac{1}{2}\right)+\dots +\left(2(N-1)-\frac{1}{2}\right)\right)\hfill \\ =(1+o\left(1\right))\frac{M}{N}\left(\frac{1}{2}+\frac{2N-1}{2}\cdot (N-1)\right)\hfill \\ =(1+o\left(1\right))\frac{M}{2}\cdot \frac{(2N-1)(N-1)+1}{N}\hfill \end{array}$$
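The algebra above can be verified mechanically. Dropping the common factor $(1+o(1))M/N$, the snippet below checks with exact rational arithmetic that the bracketed series equals $\frac{(2N-1)(N-1)+1}{2}$:

```python
from fractions import Fraction

# Series from the derivation: 1/2 + sum_{j=1}^{N-1} (2j - 1/2),
# which should equal ((2N-1)(N-1)+1)/2.
for N in range(2, 100):
    series = Fraction(1, 2) + sum(2 * j - Fraction(1, 2) for j in range(1, N))
    closed = Fraction((2 * N - 1) * (N - 1) + 1, 2)
    assert series == closed
```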

Otherwise, if the robber is not caught, C just randomly checks rays: starting from the center, C chooses a random ray, goes until the end of the ray, returns to the center, and continues like this, until the robber is caught. The expected capture time in this case is

$$\sum _{j\ge 1}\left({(1-\frac{1}{N})}^{j-1}\frac{1}{N}\left(2(j-1)M+M/2\right)\right)=O\left(\mathit{MN}\right)=O\left(M\right)$$

Since this happens with probability $O\left({M}^{-1/3}\right)$, the contribution of the case where the robber switches rays is $o\left(M\right)$, and therefore, for this strategy of C, the expected capture time is

$$(1+o\left(1\right))\frac{M}{2}\cdot \frac{(2N-1)(N-1)+1}{N}$$

**(ii.2)** Now suppose C starts at the center of the star, rather than at the end of a ray, and checks all rays from there. By the same arguments as before, the capture time is

$$(1+o\left(1\right))\frac{M}{N}\left(\frac{1}{2}+\left(2+\frac{1}{2}\right)+\left(4+\frac{1}{2}\right)+\dots +\left(2+2(N-2)+\frac{1}{2}\right)\right)$$

which is worse than in the case of starting at the end of a ray.

**(ii.3)** Finally, suppose C starts at distance $cM$ from the center of some ray, for a constant $c\in (0,1)$. Depending on the order in which C sweeps the two parts of its initial ray, the capture time is

$$(1+o\left(1\right))\frac{M}{N}\left(\frac{{c}^{2}}{2}+\left(c+\frac{1}{2}\right)+\left(c+2+\frac{1}{2}\right)+\dots +\left(c+2(N-2)+\frac{1}{2}\right)+(1-c)\left(2c+2(N-1)+\frac{1-c}{2}\right)\right)$$

or

$$(1+o\left(1\right))\frac{M}{N}\left(\frac{{(1-c)}^{2}}{2}+c\left(2(1-c)+\frac{c}{2}\right)+\left(2(1-c)+c+\frac{1}{2}\right)+\dots +\left(2(1-c)+c+2(N-2)+\frac{1}{2}\right)\right)$$

neither of which improves on starting at the end of a ray.

In short, the smallest capture time is achieved when C starts at the end of some ray and therefore

$${\mathit{dct}}_{i}\left({S}_{N,M}\right)=(1+o\left(1\right))\frac{M}{2}\cdot \frac{(2N-1)(N-1)+1}{N}$$

**(iii) Computing ${H}_{d}\left({S}_{N,M}\right)$.** It follows that for all $N\in \mathbb{N}-\left\{1\right\}$ we have

$${H}_{d}\left({S}_{N,M}\right)=\frac{{\mathit{dct}}_{i}\left({S}_{N,M}\right)}{\mathit{dct}\left({S}_{N,M}\right)}=(1+o\left(1\right))\frac{(2N-1)(N-1)+1}{N}\ge 2N-3$$

completing the proof. ☐
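The final inequality can be checked directly: the gap between the ratio and the bound $2N-3$ is exactly $2/N$. A short verification in exact arithmetic:

```python
from fractions import Fraction

# ((2N-1)(N-1)+1)/N - (2N-3) = (2N^2-3N+2 - 2N^2+3N)/N = 2/N > 0.
for N in range(2, 1000):
    ratio = Fraction((2 * N - 1) * (N - 1) + 1, N)
    assert ratio - (2 * N - 3) == Fraction(2, N)
    assert ratio >= 2 * N - 3
```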

The cost of visibility in the edge CR games is defined analogously to that of node games.

Clearly, for every G we have ${\overline{H}}_{a}\left(G\right)\ge 1$ and ${\overline{H}}_{d}\left(G\right)\ge 1$. The following theorems show that in fact both ${\overline{H}}_{a}\left(G\right)$ and ${\overline{H}}_{d}\left(G\right)$ can become arbitrarily large. To prove these theorems we will use the previously introduced star graph ${S}_{N,1}$ and its line graph which is the clique ${K}_{N}$. These graphs are illustrated in Figure 2 for $N=6$.
**Theorem 10.** For every $N\in \mathbb{N}-\left\{1\right\}$ we have ${\overline{H}}_{a}\left({S}_{N,1}\right)=N-1$.

For an upper bound on $c{t}_{i}\left({K}_{N}\right)$, C can simply move to a uniformly random vertex at every step. Whether the robber stays still or moves to a vertex different from the one occupied by C, he will be caught in the next step with probability $1/(N-1)$, and thus the expected capture time is at most $N-1$.

For a lower bound, suppose that the robber always moves to a randomly chosen vertex different from the one occupied by C, possibly including the vertex he currently occupies (that is, with probability $1/(N-1)$ he stands still, and after his turn he is, with probability $1/(N-1)$, at each vertex different from the vertex occupied by C). Hence C is forced to move and, since he has no information about the robber’s location, his best strategy is also to move randomly; the robber is then caught at each step with probability $1/(N-1)$, yielding a lower bound of $N-1$ on the expected capture time. Therefore

$$c{t}_{i}\left({K}_{N}\right)=N-1$$

Hence

$${\overline{H}}_{a}\left({S}_{N,1}\right)=\frac{{\overline{ct}}_{i}\left({S}_{N,1}\right)}{\overline{ct}\left({S}_{N,1}\right)}=\frac{c{t}_{i}\left({K}_{N}\right)}{ct\left({K}_{N}\right)}=N-1$$

☐

For $dc{t}_{i}\left({K}_{N}\right)$, it is clear that the strategy of constantly moving is best for the cop, since in every turn it gives two chances to catch the robber (either the cop moves onto the robber’s vertex, or the robber afterwards moves onto the cop’s vertex). It does not matter where the cop moves to as long as he keeps moving, and we may thus assume that he starts at some vertex v, moves to some other vertex w in the first round, then comes back to v, and oscillates like that until the end of the game. When the cop moves to another vertex, the probability that the robber is there is $1/(N-1)$. If the robber is still not caught, he moves to a random place, thereby selecting the vertex occupied by the cop with probability $1/(N-1)$. Hence, the probability of catching the robber in one turn is $\frac{1}{N-1}+(1-\frac{1}{N-1})\frac{1}{N-1}=\frac{2N-3}{{(N-1)}^{2}}$. Thus the capture time is a geometric random variable with success probability $\frac{2N-3}{{(N-1)}^{2}}$. We get $dc{t}_{i}\left({K}_{N}\right)=\frac{{(N-1)}^{2}}{2N-3}$ and so
which can become arbitrarily large by appropriate choice of N. ☐

$${\overline{H}}_{d}\left({S}_{N,1}\right)=\frac{{\overline{dct}}_{i}\left({S}_{N,1}\right)}{\overline{dct}\left({S}_{N,1}\right)}=\frac{dc{t}_{i}\left({K}_{N}\right)}{dct\left({K}_{N}\right)}=\frac{{(N-1)}^{2}/(2N-3)}{(N-1)/N}=\frac{N(N-1)}{2N-3}$$
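The one-step capture probability and the resulting value of ${\overline{H}}_{d}\left({S}_{N,1}\right)$ can be verified with exact arithmetic (the value $dct\left({K}_{N}\right)=(N-1)/N$ is read off the displayed ratio):

```python
from fractions import Fraction

# One-step capture probability for the oscillating cop on K_N, and the
# resulting edge cost of visibility for the drunk robber.
for N in range(3, 100):
    p = Fraction(1, N - 1) + (1 - Fraction(1, N - 1)) * Fraction(1, N - 1)
    assert p == Fraction(2 * N - 3, (N - 1) ** 2)
    dct_i = 1 / p                        # mean of a geometric random variable
    H_d = dct_i / Fraction(N - 1, N)     # dct(K_N) = (N-1)/N
    assert H_d == Fraction(N * (N - 1), 2 * N - 3)
```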

For graphs of relatively simple structure (e.g., paths, cycles, full trees, grids) capture times and optimal strategies can be found by analytical arguments [9,37]. For more complicated graphs, an algorithmic solution becomes necessary. In this section we present algorithms for the computation of capture time in the previously introduced node CR variants. The same algorithms can be applied to the edge variants by replacing G with $L\left(G\right)$.

The av-CR capture time $ct\left(G\right)$ can be computed in polynomial time. In fact, stronger results have been presented by Hahn and MacGillivray; in [31] they present an algorithm which, given K, computes for every $\left(x,y\right)\in {V}^{2}$ the following:

- $C\left(x,y\right)$, the optimal game duration when the cop/robber configuration is $(x,y)$ and it is C’s turn to play;
- $R\left(x,y\right)$, the optimal game duration when the cop/robber configuration is $(x,y)$ and it is R’s turn to play.

The av-CR capture time can be computed by $ct\left(G\right)={min}_{x\in V}{max}_{y\in V}C\left(x,y\right)$; the optimal search strategies ${\widehat{\sigma}}_{C}$, ${\widehat{\sigma}}_{R}$ can also be easily obtained from the optimality equations, as will be seen a little later. We have presented in [37] an implementation of Hahn and MacGillivray’s algorithm, which we call CAAR (Cops Against Adversarial Robber). Below we present this, as Algorithm 1, for the case of a single cop (the generalization for more than one cop is straightforward).

The algorithm operates as follows. In lines 01-08 ${C}^{\left(0\right)}\left(x,y\right)$ and ${R}^{\left(0\right)}\left(x,y\right)$ are initialized to ∞, except for “diagonal” positions $\left(x,y\right)\in {V}_{D}^{2}$ (i.e., positions with $x=y$) for which we obviously have $C\left(x,x\right)=R\left(x,x\right)=0$. Then a loop is entered (lines 10-19) in which ${C}^{\left(i\right)}\left(x,y\right)$ is computed (line 12) by letting the cop move to the position which achieves the smallest capture time (according to the currently available estimate ${R}^{\left(i-1\right)}\left(x,y\right)$); ${R}^{\left(i\right)}\left(x,y\right)$ is computed similarly in line 13, looking for the largest capture time. This process is repeated until no further changes take place, at which point the algorithm exits the loop and terminates. This algorithm is a game theoretic version of value iteration [36], which we see again in Section 5.2. It has been proved in [31] that, for any graph G and any $K\in \mathbb{N}$, CAAR always terminates and the finally obtained $\left(C,R\right)$ pair satisfies the optimality equations

$$\begin{array}{c}\forall \left(x,y\right)\in {V}_{D}^{2}:C\left(x,y\right)=0;\phantom{\rule{1.em}{0ex}}\forall \left(x,y\right)\in {V}^{2}-{V}_{D}^{2}:C\left(x,y\right)=1+\underset{{x}^{\prime}\in N\left[x\right]}{min}R\left({x}^{\prime},y\right)\hfill \end{array}$$

$$\begin{array}{c}\forall \left(x,y\right)\in {V}_{D}^{2}:R\left(x,y\right)=0;\phantom{\rule{1.em}{0ex}}\forall \left(x,y\right)\in {V}^{2}-{V}_{D}^{2}:R\left(x,y\right)=1+\underset{{y}^{\prime}\in N\left[y\right]}{max}C\left(x,{y}^{\prime}\right)\hfill \end{array}$$

The optimal memoryless strategies ${\sigma}_{C}^{\left(K\right)}\left(x,y\right)$, ${\sigma}_{R}^{\left(K\right)}\left(x,y\right)$ can be computed for every position $(x,y)$ by letting ${\sigma}_{C}^{\left(K\right)}\left(x,y\right)$ (resp. ${\sigma}_{R}^{\left(K\right)}\left(x,y\right)$) be a node ${x}^{\prime}\in N\left[x\right]$ (resp. ${y}^{\prime}\in N\left[y\right]$) which achieves the minimum in Equation (15) (resp. maximum in Equation (16)). The capture time $ct\left(G\right)$ is computed from

$$ct\left(G\right)=\underset{x\in V}{min}\underset{y\in V}{max}C\left(x,y\right)$$

Algorithm 1: Cops Against Adversarial Robber (CAAR)

Input: $G=(V,E)$

01 For All $\left(x,y\right)\in {V}_{D}^{2}$

02 ${C}^{\left(0\right)}\left(x,y\right)=0$

03 ${R}^{\left(0\right)}\left(x,y\right)=0$

04 EndFor

05 For All $\left(x,y\right)\in {V}^{2}-{V}_{D}^{2}$

06 ${C}^{\left(0\right)}\left(x,y\right)=\infty $

07 ${R}^{\left(0\right)}\left(x,y\right)=\infty $

08 EndFor

09 $i=1$

10 While $1>0$

11 For All $\left(x,y\right)\in {V}^{2}-{V}_{D}^{2}$

12 ${C}^{\left(i\right)}\left(x,y\right)=1+{min}_{{x}^{\prime}\in N\left[x\right]}{R}^{\left(i-1\right)}\left({x}^{\prime},y\right)$

13 ${R}^{\left(i\right)}\left(x,y\right)=1+{max}_{{y}^{\prime}\in N\left[y\right]}{C}^{\left(i\right)}\left(x,{y}^{\prime}\right)$

14 EndFor

15 If ${C}^{\left(i\right)}={C}^{\left(i-1\right)}$ And ${R}^{\left(i\right)}={R}^{\left(i-1\right)}$

16 Break

17 EndIf

18 $i\leftarrow i+1$

19 EndWhile

20 $C={C}^{\left(i\right)}$

21 $R={R}^{\left(i\right)}$

Output: C, R
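For concreteness, here is a minimal Python transcription of Algorithm 1 for a single cop. The graph representation (a dictionary mapping each node to its closed neighborhood $N\left[x\right]$) is our own choice; on cop-win graphs the iteration reaches the fixed point of Equations (15) and (16), and $ct\left(G\right)$ is then read off via Equation (17).

```python
import math

def caar(adj):
    """CAAR for one cop: value iteration on optimality Equations (15)-(16).
    `adj` maps each node to its CLOSED neighborhood N[x]."""
    V = list(adj)
    C = {(x, y): 0 if x == y else math.inf for x in V for y in V}
    R = dict(C)
    while True:
        C_new, R_new = dict(C), dict(R)
        for x in V:
            for y in V:
                if x == y:
                    continue
                # Line 12: cop minimizes against the previous R estimate.
                C_new[(x, y)] = 1 + min(R[(xp, y)] for xp in adj[x])
                # Line 13: robber maximizes against the updated C estimate.
                R_new[(x, y)] = 1 + max(C_new[(x, yp)] for yp in adj[y])
        if C_new == C and R_new == R:   # lines 15-16: fixed point reached
            break
        C, R = C_new, R_new
    ct = min(max(C[(x, y)] for y in V) for x in V)   # Equation (17)
    return C, R, ct

# On the path with three nodes the cop starts at the center
# and captures in a single move, so ct = 1.
path3 = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
_, _, ct = caar(path3)
assert ct == 1
```

Note that, as in the pseudocode, the algorithm must only be applied when $K\ge c\left(G\right)$; otherwise the entries of C remain infinite and the loop does not terminate.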

For any given K, value iteration can be used to determine both $dct\left(G,K\right)$ and the optimal strategy ${\sigma}_{C}^{\left(K\right)}\left(x,y\right)$; one implementation is our CADR (Cops Against Drunk Robber) algorithm [37], which is a typical value-iteration MDP algorithm [36]; alternatively, CADR can be seen as an extension of the CAAR idea to dv-CR. Below we present this, as Algorithm 2, for the case of a single cop (the generalization to more than one cop is straightforward).

Algorithm 2: Cops Against Drunk Robber (CADR)

Input: $G=(V,E)$, ε

01 For All $\left(x,y\right)\in {V}_{D}^{2}$

02 $\phantom{\rule{4pt}{0ex}}{C}^{\left(0\right)}\left(x,y\right)=0$

03 EndFor

04 For All $\left(x,y\right)\in {V}^{2}-{V}_{D}^{2}$

05 $\phantom{\rule{4pt}{0ex}}{C}^{\left(0\right)}\left(x,y\right)=\infty $

06 EndFor

07 $i=1$

08 While $1>0$

09 For All $\left(x,y\right)\in {V}^{2}-{V}_{D}^{2}$

10 ${C}^{\left(i\right)}\left(x,y\right)=1+{min}_{{x}^{\prime}\in N\left[x\right]}{\sum}_{{y}^{\prime}\in V}P\left(\left({x}^{\prime},y\right)\to \left({x}^{\prime},{y}^{\prime}\right)\right){C}^{\left(i-1\right)}\left({x}^{\prime},{y}^{\prime}\right)$

11 EndFor

12 If ${max}_{\left(x,y\right)\in {V}^{2}}\left|{C}^{\left(i\right)}\left(x,y\right)-{C}^{\left(i-1\right)}\left(x,y\right)\right|<\epsilon $

13 Break

14 EndIf

15 $i\leftarrow i+1$

16 EndWhile

17 $C={C}^{\left(i\right)}$

Output: C

The algorithm operates as follows (again we use $C\left(x,y\right)$ to denote the optimal expected game duration when the game position is $\left(x,y\right)$). In lines 01-06 ${C}^{\left(0\right)}\left(x,y\right)$ is initialized to ∞, except for “diagonal” positions $\left(x,y\right)\in {V}_{D}^{2}$. In the main loop (lines 08-16) ${C}^{\left(i\right)}\left(x,y\right)$ is computed (line 10) by letting the cop move to the position which achieves the smallest expected capture time ($P\left(\left({x}^{\prime},y\right)\to \left({x}^{\prime},{y}^{\prime}\right)\right)$ in line 10 indicates the transition probability from $\left({x}^{\prime},y\right)$ to $\left({x}^{\prime},{y}^{\prime}\right)$). This process is repeated until the maximum change $\left|{C}^{\left(i\right)}\left(x,y\right)-{C}^{\left(i-1\right)}\left(x,y\right)\right|$ is smaller than the termination criterion ε, at which point the algorithm exits the loop and terminates. This is a typical value iteration MDP algorithm [36]; the convergence of such algorithms has been studied by several authors, in various degrees of generality [38,39,40]. A simple yet strong result, derived in [39], uses the concept of proper strategy: a strategy is called proper if it yields finite expected capture time. It is proved in [39] that, if a proper strategy exists for graph G, then CADR-like algorithms converge. In the case of dv-CR we know that C has a proper strategy: it is the random walking strategy ${\overline{s}}_{C}^{\left(K\right)}$ mentioned in Theorem 5. Hence CADR converges and in the limit, $C={lim}_{i\to \infty}{C}^{\left(i\right)}$ satisfies the optimality equations

$$\forall \left(x,y\right)\in {V}_{D}^{2}:C\left(x,y\right)=0;\phantom{\rule{1.em}{0ex}}\forall \left(x,y\right)\in {V}^{2}-{V}_{D}^{2}:C\left(x,y\right)=1+\underset{{x}^{\prime}\in N\left[x\right]}{min}\sum _{{y}^{\prime}\in V}P\left(\left({x}^{\prime},y\right)\to \left({x}^{\prime},{y}^{\prime}\right)\right)C\left({x}^{\prime},{y}^{\prime}\right)$$

The optimal memoryless strategy ${\sigma}_{C}^{\left(K\right)}\left(x,y\right)$ can be computed for every position $(x,y)$ by letting ${\sigma}_{C}^{\left(K\right)}\left(x,y\right)$ be a node ${x}^{\prime}\in N\left[x\right]$ which achieves the minimum in the optimality equations above. Since the drunk robber is initially placed uniformly at random, the capture time $dct\left(G\right)$ is computed from

$$dct\left(G\right)=\underset{x\in V}{min}\frac{1}{\left|V\right|}\sum _{y\in V}C\left(x,y\right)$$
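A minimal Python sketch of Algorithm 2 for one cop follows. One detail the pseudocode delegates to the transition kernel P is the drunk robber's move distribution; purely for illustration we assume here that the robber moves to a uniformly random vertex of his closed neighborhood $N\left[y\right]$, with capture absorbing.

```python
import math

def cadr(adj, eps=1e-9):
    """CADR value iteration for one cop. `adj` maps each node to its
    CLOSED neighborhood N[x]. Assumption (not fixed by the pseudocode):
    the drunk robber moves to a uniformly random node of N[y]."""
    V = list(adj)
    # Finite initialization is also valid: a proper cop strategy exists,
    # so the iteration converges to the least fixed point regardless.
    C = {(x, y): 0.0 for x in V for y in V}
    while True:
        C_new, delta = {}, 0.0
        for x in V:
            for y in V:
                if x == y:
                    C_new[(x, y)] = 0.0
                    continue
                best = math.inf
                for xp in adj[x]:
                    if xp == y:          # cop steps onto the robber
                        val = 0.0
                    else:                # drunk robber random-walks from y
                        val = sum(C[(xp, yp)] for yp in adj[y]) / len(adj[y])
                    best = min(best, val)
                C_new[(x, y)] = 1 + best  # line 10 of Algorithm 2
                delta = max(delta, abs(C_new[(x, y)] - C[(x, y)]))
        C = C_new
        if delta < eps:                   # line 12: termination criterion
            break
    return C

# Path on three nodes: under the assumed robber walk, C(1,0)=1 (the cop
# at the center captures in one move) and C(0,2)=1.5.
path3 = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
C = cadr(path3)
assert abs(C[(1, 0)] - 1.0) < 1e-6
assert abs(C[(0, 2)] - 1.5) < 1e-6
```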

We have not been able to find an efficient algorithm for solving the ai-CR game. Several algorithms for imperfect information stochastic games could be used to this end but we have found that they are practical only for very small graphs. The problem is that for every game position (e.g., assuming one robber and one cop, for a triple $(x,y,p)$ indicating cop-position, robber-position and player to move) a full two-player, one-turn sub-game must be solved; this must be done for $2\xb7{\left|V\right|}^{2}$ positions and for sufficient iterations to achieve convergence. The computational load quickly becomes unmanageable.

In the case of the drunk invisible robber we are also using a game tree search algorithm with pruning, for which some analytical justification can be provided. We call this the Pruned Cop Search (PCS) algorithm. Before presenting the algorithm we will introduce some notation and then prove a simple fact about expected capture time. We limit ourselves to the single cop case, since the extension to more cops is straightforward.

We let $\mathbf{x}={x}_{0}{x}_{1}{x}_{2}\dots $ be an infinite history of cop moves. With t denoting the current time step, the probability vector $\mathbf{p}\left(t\right)$ contains the probabilities of the robber being in node $v\in V$ or in the capture state $n+1$; more specifically, $\mathbf{p}\left(t\right)=\left[{p}_{1}\left(t\right),\dots ,{p}_{v}\left(t\right),\dots ,{p}_{n}\left(t\right),{p}_{n+1}\left(t\right)\right]$ and ${p}_{v}\left(t\right)=Pr\left({y}_{t}=v|{x}_{0}{x}_{1}\dots {x}_{t}\right)$. Hence $\mathbf{p}\left(t\right)$ depends (as expected) on the finite cop history ${x}_{0}{x}_{1}\dots {x}_{t}$. The expected capture time is denoted by $C\left(\mathbf{x}\right)=E\left(T|\mathbf{x}\right)$; the conditioning is on the infinite cop history. The PCS algorithm works because $E\left(T|\mathbf{x}\right)$ can be approximated from a finite part of $\mathbf{x}$, as explained below. We have
$$C\left(\mathbf{x}\right)=E\left(T|\mathbf{x}\right)=\sum _{t=0}^{\infty}t\cdot Pr\left(T=t|\mathbf{x}\right)=\sum _{t=0}^{\infty}Pr\left(T>t|\mathbf{x}\right)$$

where the $\mathbf{x}$ in the conditioning is the infinite history $\mathbf{x}={x}_{0}{x}_{1}{x}_{2}\dots $ . However, for every t we have

$$Pr\left(T>t|\mathbf{x}\right)=1-Pr\left(T\le t|\mathbf{x}\right)=1-Pr\left(T\le t|{x}_{0}{x}_{1}\dots {x}_{t}\right)$$

Let us define

$${C}^{\left(t\right)}\left({x}_{0}{x}_{1}\dots {x}_{t}\right)=\sum _{\tau =0}^{t}\left[1-Pr\left(T\le \tau |{x}_{0}{x}_{1}\dots {x}_{\tau}\right)\right]=\sum _{\tau =0}^{t}\left[1-{p}_{n+1}\left(\tau \right)\right]$$

where ${p}_{n+1}\left(\tau \right)$ is the probability that the robber is in the capture state $n+1$ at time τ (the dependence on ${x}_{0}{x}_{1}\dots {x}_{\tau}$ is suppressed, for simplicity of notation). Then for all t we have

$${C}^{\left(t\right)}\left({x}_{0}{x}_{1}\dots {x}_{t}\right)={C}^{\left(t-1\right)}\left({x}_{0}{x}_{1}\dots {x}_{t-1}\right)+\left(1-{p}_{n+1}\left(t\right)\right)$$

Update Equation (19) can be computed using only the previous cost ${C}^{\left(t-1\right)}\left({x}_{0}{x}_{1}\dots {x}_{t-1}\right)$ and the (previously computed) probability vector $\mathbf{p}\left(t\right)$. While ${C}^{\left(t\right)}\left({x}_{0}\dots {x}_{t}\right)\le C\left(\mathbf{x}\right)$, we hope that (at least for the “good” histories) we have

$$\underset{t\to \infty}{lim}{C}^{\left(t\right)}\left({x}_{0}\dots {x}_{t}\right)=C\left(\mathbf{x}\right)$$

This approximation works well, with ${C}^{\left(t\right)}\left({x}_{0}\dots {x}_{t}\right)$ approaching its limiting value when t is in the range 15 to 20.

Below we present this, as Algorithm 3, in pseudocode. We have introduced a structure S with fields $S.\mathbf{x}$, $S.\mathbf{p}$, $S.C=C\left(S.\mathbf{x}\right)$. Also we denote concatenation by the & symbol, i.e., ${x}_{0}{x}_{1}\dots {x}_{t}\&v={x}_{0}{x}_{1}\dots {x}_{t}v$.

Algorithm 3: Pruned Cop Search (PCS)

Input: $G=(V,E)$, ${x}_{0}$, ${J}_{max}$, ε

01 $t=0$

02 $S.\mathbf{x}={x}_{0}$, $S.\mathbf{p}=Pr\left({y}_{0}|{x}_{0}\right)$, $S.C=0$

03 $\mathbf{S}=\left\{S\right\}$

04 ${C}_{best}^{old}=0$

05 While $1>0$

06 $\tilde{\mathbf{S}}=\varnothing $

07 For All $S\in \mathbf{S}$

08 $\mathbf{x}=S.\mathbf{x}$, $\mathbf{p}=S.\mathbf{p}$, $C=S.C$

09 For All $v\in N\left[{x}_{t}\right]$

10 ${\mathbf{x}}^{\prime}=\mathbf{x}\&v$

11 ${\mathbf{p}}^{\prime}=\mathbf{p}\cdot P\left(v\right)$

12 ${C}^{\prime}=\mathbf{Cost}({\mathbf{x}}^{\prime},{\mathbf{p}}^{\prime},C)$

13 ${S}^{\prime}.\mathbf{x}={\mathbf{x}}^{\prime}$, ${S}^{\prime}.\mathbf{p}={\mathbf{p}}^{\prime}$, ${S}^{\prime}.C={C}^{\prime}$

14 $\tilde{\mathbf{S}}=\tilde{\mathbf{S}}\cup \left\{{S}^{\prime}\right\}$

15 EndFor

16 EndFor

17 $\mathbf{S}=\mathbf{Prune}(\tilde{\mathbf{S}},{J}_{max})$

18 $[{\mathbf{x}}_{best},{C}_{best}]=\mathbf{Best}\left(\mathbf{S}\right)$

19 If $|{C}_{best}-{C}_{best}^{old}|<\epsilon $

20 Break

21 Else

22 ${C}_{best}^{old}={C}_{best}$

23 $t\leftarrow t+1$

24 EndIf

25 EndWhile

Output: ${\mathbf{x}}_{best}$, ${C}_{best}=C\left({\mathbf{x}}_{best}\right)$.

The PCS algorithm operates as follows. At initialization (lines 01-04), we create a single S structure (with $S.\mathbf{x}$ being the initial cop position, $S.\mathbf{p}$ the initial, uniform robber probability and $S.C=0$) which we store in the set $\mathbf{S}$. Then we enter the main loop (lines 05-25) where we pick each available cop sequence $\mathbf{x}$ of length t (line 08). Then, in lines 09-15 we compute, for all legal extensions ${\mathbf{x}}^{\prime}=\mathbf{x}\&v$ (where $v\in N\left[{x}_{t}\right]$) of length $t+1$ (line 10), the corresponding ${\mathbf{p}}^{\prime}$ (line 11) and ${C}^{\prime}$ (by the subroutine $\mathbf{Cost}$ at line 12). We store these quantities in ${S}^{\prime}$ which is placed in the temporary storage set $\tilde{\mathbf{S}}$ (lines 13–14). After exhausting all possible extensions of length $t+1$, we prune the temporary set $\tilde{\mathbf{S}}$, retaining only the ${J}_{max}$ best cop sequences (this is done in line 17 by the subroutine **Prune** which computes “best” in terms of smallest $C\left(\mathbf{x}\right)$). Finally, the subroutine $\mathbf{Best}$ in line 18 computes the overall smallest expected capture time ${C}_{best}=C\left({\mathbf{x}}_{best}\right)$. The procedure is repeated until the termination criterion $|{C}_{best}-{C}_{best}^{old}|<\epsilon $ is satisfied. As explained above, the criterion is expected to be always eventually satisfied because of Equation (20).
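The loop described above can be sketched compactly in Python: belief updates on $\mathbf{p}$, the incremental cost update of Equation (19), pruning to the ${J}_{max}$ best histories, and the $|{C}_{best}-{C}_{best}^{old}|<\epsilon $ termination test. As in the CADR sketch, the drunk robber's random walk over closed neighborhoods is our own modeling assumption.

```python
def pcs(adj, x0, J_max=8, eps=1e-6):
    """Pruned Cop Search sketch for one cop against a drunk invisible
    robber. `adj` maps nodes to closed neighborhoods; we assume the
    robber starts uniformly and random-walks on closed neighborhoods."""
    V = list(adj)
    p0 = {v: 1.0 / len(V) for v in V}
    cap0 = p0.pop(x0)                  # robber starting on the cop is caught
    states = [([x0], p0, cap0, 0.0)]   # (history, belief, P(captured), C)
    C_best_old = 0.0
    while True:
        expanded = []
        for hist, p, cap, C in states:
            for v in adj[hist[-1]]:            # all legal extensions x & v
                q = dict(p)
                c2 = cap + q.pop(v, 0.0)       # cop steps onto the robber
                q2 = {u: 0.0 for u in V}
                for u, mass in q.items():      # drunk robber random-walks
                    share = mass / len(adj[u])
                    for w in adj[u]:
                        q2[w] += share
                c2 += q2.pop(v, 0.0)           # robber walks into the cop
                expanded.append((hist + [v], q2, c2, C + (1.0 - c2)))
        expanded.sort(key=lambda s: s[3])      # Prune: keep J_max best
        states = expanded[:J_max]
        C_best = states[0][3]
        if abs(C_best - C_best_old) < eps:     # termination criterion
            return states[0][0], C_best
        C_best_old = C_best

path3 = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
best_hist, C_best = pcs(path3, 1)
assert best_hist[0] == 1 and C_best > 0
```

Since each per-turn increment $1-{p}_{n+1}\left(t\right)$ is nonnegative and shrinks as the capture probability approaches one, the cost estimates grow monotonically toward their limits and the termination test is eventually satisfied.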

We now present numerical computations of the drunk cost of visibility for graphs which are not amenable to analytical computation. We do not deal with the adversarial cost of visibility because, while we can compute $ct\left(G\right)$ with the CAAR algorithm, we do not have an efficient algorithm to compute $c{t}_{i}\left(G\right)$; hence we cannot perform experiments on ${H}_{a}\left(G\right)=\frac{c{t}_{i}\left(G\right)}{ct\left(G\right)}$. The difficulty with $c{t}_{i}\left(G\right)$ is that ai-CR is a stochastic game of imperfect information; even for very small graphs, one cop and one robber, ai-CR involves a state space with size far beyond the capabilities of currently available stochastic games algorithms (see [41]). In Section 6.1 we deal with node games and in Section 6.2 with edge games.

Since ${H}_{d}\left(G\right)=\frac{dc{t}_{i}\left(G\right)}{dct\left(G\right)}$, we use the CADR algorithm to compute $dct\left(G\right)$ and the PCS algorithm to compute $dc{t}_{i}\left(G\right)$. We use graphs G obtained from indoor environments, which we represent by their floorplans. In Figure 3 we present a floorplan and its graph representation. The graph is obtained by decomposing the floorplan into convex cells, assigning each cell to a node and connecting nodes by edges whenever the corresponding cells are connected by an open space.

We have written a script which, given some parameters, generates random floorplans and their graphs. Every floorplan consists of a rectangle divided into orthogonal “rooms”. If each internal room were connected to its four nearest neighbors we would get an $M\times N$ grid ${G}^{\prime}$. However, we randomly generate a spanning tree ${G}_{T}$ of ${G}^{\prime}$ and initially introduce doors only between rooms which are connected in ${G}_{T}$. Our final graph G is obtained from ${G}_{T}$ by iterating over all missing edges and adding each one with probability ${p}_{0}\in \left[0,1\right]$. Hence each floorplan is characterized by three parameters: M, N and ${p}_{0}$.
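The floorplan-to-graph generator described above can be sketched as follows. Function names are hypothetical, and the spanning tree is drawn here by a random-order Kruskal pass, which need not match the sampling method used in our script.

```python
import random

def grid_edges(M, N):
    """All edges of the M x N grid graph; nodes are (row, col) pairs."""
    E = []
    for r in range(M):
        for c in range(N):
            if r + 1 < M:
                E.append(((r, c), (r + 1, c)))
            if c + 1 < N:
                E.append(((r, c), (r, c + 1)))
    return E

def random_floorplan_graph(M, N, p0, rng=None):
    """Random spanning tree of the M x N grid, plus each remaining grid
    edge ("door") added independently with probability p0."""
    rng = rng or random.Random(0)
    edges = grid_edges(M, N)
    rng.shuffle(edges)
    parent = {(r, c): (r, c) for r in range(M) for c in range(N)}
    def find(u):                      # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree, rest = [], []
    for u, v in edges:                # random-order Kruskal: the first
        ru, rv = find(u), find(v)     # M*N-1 cycle-free edges form a
        if ru != rv:                  # spanning tree of the grid
            parent[ru] = rv
            tree.append((u, v))
        else:
            rest.append((u, v))
    return tree + [e for e in rest if rng.random() < p0]
```

For $p_0=0$ this returns exactly the $MN-1$ tree edges and for $p_0=1$ the full grid; intermediate values interpolate between the two, matching the tree-to-grid progression used in the experiments.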

We use the following pairs of $\left(M,N\right)$ values: (1,30), (2,15), (3,10), (4,7), (5,6). Four of these pairs give a total of 30 nodes and the pair ($M=4$, $N=7$) gives $n=28$ nodes; as $M/N$ increases, we progress from a path to a nearly square grid. For each $\left(M,N\right)$ pair we use five ${p}_{0}$ values: 0.00, 0.25, 0.50, 0.75, 1.00; note the progression from a tree (${p}_{0}=0.00$) to a full grid (${p}_{0}=1.00$). For each triple $\left(M,N,{p}_{0}\right)$ we generate 50 floorplans, obtain their graphs and for each graph G we compute $dct\left(G\right)$ using CADR, $dc{t}_{i}\left(G\right)$ using PCS and ${H}_{d}\left(G\right)=\frac{dc{t}_{i}\left(G\right)}{dct\left(G\right)}$; finally we average ${H}_{d}\left(G\right)$ over the 50 graphs. In Figure 4 we plot $dct\left(G\right)$ as a function of the probability ${p}_{0}$; each plotted curve corresponds to an $\left(M,N\right)$ pair. Similarly, in Figure 5 we plot $dc{t}_{i}\left(G\right)$ and in Figure 6 we plot ${H}_{d}\left(G\right)$.

We can see in Figure 4 and Figure 5 that both $dct\left(G\right)$ and $dc{t}_{i}\left(G\right)$ are usually decreasing functions of the $M/N$ ratio. However, the cost of visibility ${H}_{d}\left(G\right)$ increases with $M/N$. When the $M/N$ ratio is low, G is close to a path, so the search schedules and capture times of dv-CR and di-CR differ little. On the other hand, for a high $M/N$ ratio, G is closer to a grid, with a significantly increased ratio of edges to nodes (as compared to the low-$M/N$, path-like instances). This, combined with the loss of information (visibility), makes ${H}_{d}\left(G\right)$ an increasing function of $M/N$. The increase of ${H}_{d}\left(G\right)$ with ${p}_{0}$ can be explained in the same way: increasing ${p}_{0}$ introduces more edges, which makes the cops’ task harder.

Next we deal with ${\overline{H}}_{d}\left(G\right)=\frac{{\overline{dct}}_{i}\left(G\right)}{\overline{dct}\left(G\right)}$. We use graphs G obtained from mazes such as the one illustrated in Figure 7. Every corridor of the maze corresponds to an edge; corridor intersections correspond to nodes. The resulting graph G is also depicted in Figure 7. From G we obtain the line graph $L\left(G\right)$, to which we apply CADR to compute $dct\left(L\left(G\right)\right)=\overline{dct}\left(G\right)$ and PCS to compute $dc{t}_{i}\left(L\left(G\right)\right)={\overline{dct}}_{i}\left(G\right)$.
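The reduction of the edge game on G to a node game on $L(G)$ only requires the line-graph construction, sketched below; this is a plain illustration, and a graph library's built-in line-graph routine would serve equally well.

```python
from itertools import combinations

def line_graph(edges):
    """Line graph L(G): one node per edge of G, with two nodes adjacent
    iff the corresponding edges of G share an endpoint."""
    es = [frozenset(e) for e in edges]
    return [(tuple(sorted(e)), tuple(sorted(f)))
            for e, f in combinations(es, 2) if e & f]
```

For example, the maze's corridors become the nodes of $L(G)$, and the node-game algorithms (CADR, PCS) can then be run on $L(G)$ unchanged to obtain $\overline{dct}(G)$ and ${\overline{dct}}_{i}(G)$.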

We use graphs of the same type as those of Section 6.1, but we now focus on the edge-to-edge movements of the cops and robber. Hence from every G (obtained from a specific $(M,N,{p}_{0})$ triple) we produce the line graph $L\left(G\right)$, for which we compute ${H}_{d}\left(L\left(G\right)\right)$ using the CADR and PCS algorithms. Once again we generate 50 graphs per triple and present average $\overline{dct}\left(G\right)$, ${\overline{dct}}_{i}\left(G\right)$ and ${\overline{H}}_{d}\left(G\right)$ results in Figure 8, Figure 9 and Figure 10. These figures are rather similar to Figure 4, Figure 5 and Figure 6, except that the increase of ${\overline{H}}_{d}\left(G\right)$ as a function of $M/N$ is greater than that of ${H}_{d}\left(G\right)$. This is because $L\left(G\right)$ has more nodes and edges than G; hence the loss of visibility makes the edge game significantly harder than the node game. There is one exception to the above remarks, namely the case $(M,N)=(1,30)$: in this case both G and $L\left(G\right)$ are paths and ${H}_{d}\left(G\right)$ is essentially equal to ${\overline{H}}_{d}\left(G\right)$ (as can be seen by comparing Figure 6 and Figure 10).

In this paper we have studied two versions of the cops and robber game: one played on the nodes of a graph and the other on its edges. For each version, we studied four variants, obtained by changing the visibility and adversariality assumptions regarding the robber; hence we have a total of eight CR games. For each of these we have rigorously defined the corresponding optimal capture time, using game-theoretic and probabilistic tools.

Then, for the node games, we have introduced the adversarial cost of visibility $H\left(G\right)=\frac{c{t}_{i}\left(G\right)}{ct\left(G\right)}$ and the drunk cost of visibility ${H}_{d}\left(G\right)=\frac{dc{t}_{i}\left(G\right)}{dct\left(G\right)}$. These ratios quantify the increase in difficulty of the CR game when the cops are no longer aware of the robber’s position (a situation which occurs often in mobile robotics).

We have defined analogous quantities ($\overline{H}\left(G\right)=\frac{\overline{{ct}_{i}}\left(G\right)}{\overline{ct}\left(G\right)}$, ${\overline{H}}_{d}\left(G\right)=\frac{{\overline{dct}}_{i}\left(G\right)}{\overline{dct}\left(G\right)}$) for the edge CR games.

We have studied analytically $H\left(G\right)$ and ${H}_{d}\left(G\right)$ and have established that both can get arbitrarily large. We have established similar results for $\overline{H}\left(G\right)$ and ${\overline{H}}_{d}\left(G\right)$. In addition, we have studied ${H}_{d}\left(G\right)$ and ${\overline{H}}_{d}\left(G\right)$ by numerical experiments which support both the game theoretic results of the current paper and the analytical computations of capture times presented in [9,37].

Each of the three authors of the paper has contributed to all aspects of the theoretical analysis. The numerical experiments were designed and implemented by Athanasios Kehagias.

The authors declare no conflict of interest.

- Chung, T.H.; Hollinger, G.A.; Isler, V. Search and pursuit-evasion in mobile robotics. Auton. Robots **2011**, 31, 299–316.
- Isler, V.; Karnad, N. The role of information in the cop-robber game. Theor. Comput. Sci. **2008**, 399, 179–190.
- Alspach, B. Searching and sweeping graphs: A brief survey. Le Matematiche **2006**, 59, 5–37.
- Bonato, A.; Nowakowski, R. The Game of Cops and Robbers on Graphs; AMS: Providence, RI, USA, 2011.
- Fomin, F.V.; Thilikos, D.M. An annotated bibliography on guaranteed graph searching. Theor. Comput. Sci. **2008**, 399, 236–245.
- Nowakowski, R.; Winkler, P. Vertex-to-vertex pursuit in a graph. Discret. Math. **1983**, 43, 235–239.
- Dereniowski, D.; Dyer, D.; Tifenbach, R.M.; Yang, B. Zero-visibility cops and robber game on a graph. In Frontiers in Algorithmics and Algorithmic Aspects in Information and Management; Springer: Berlin, Germany, 2013; pp. 175–186.
- Isler, V.; Kannan, S.; Khanna, S. Randomized pursuit-evasion with local visibility. SIAM J. Discret. Math. **2007**, 20, 26–41.
- Kehagias, A.; Mitsche, D.; Prałat, P. Cops and invisible robbers: The cost of drunkenness. Theor. Comput. Sci. **2013**, 481, 100–120.
- Adler, M.; Racke, H.; Sivadasan, N.; Sohler, C.; Vocking, B. Randomized pursuit-evasion in graphs. Lect. Notes Comput. Sci. **2002**, 2380, 901–912.
- Vieira, M.; Govindan, R.; Sukhatme, G.S. Scalable and practical pursuit-evasion. In Proceedings of the 2009 IEEE Second International Conference on Robot Communication and Coordination (ROBOCOMM’09), Odense, Denmark, 31 March–2 April 2009; pp. 1–6.
- Gerkey, B.; Thrun, S.; Gordon, G. Parallel stochastic hill-climbing with small teams. In Multi-Robot Systems. From Swarms to Intelligent Automata; Springer: Dordrecht, Netherlands, 2005; Volume III, pp. 65–77.
- Hollinger, G.; Singh, S.; Djugash, J.; Kehagias, A. Efficient multi-robot search for a moving target. Int. J. Robot. Res. **2009**, 28, 201–219.
- Hollinger, G.; Singh, S.; Kehagias, A. Improving the efficiency of clearing with multi-agent teams. Int. J. Robot. Res. **2010**, 29, 1088–1105.
- Lau, H.; Huang, S.; Dissanayake, G. Probabilistic search for a moving target in an indoor environment. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 3393–3398.
- Sarmiento, A.; Murrieta, R.; Hutchinson, S.A. An efficient strategy for rapidly finding an object in a polygonal world. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; Volume 2, pp. 1153–1158.
- Hsu, D.; Lee, W.S.; Rong, N. A point-based POMDP planner for target tracking. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, CA, USA, 19–23 May 2008; pp. 2644–2650.
- Kurniawati, H.; Hsu, D.; Lee, W.S. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proceedings of Robotics: Science and Systems, Zurich, Switzerland, 25–28 June 2008.
- Pineau, J.; Gordon, G. POMDP planning for robust robot control. Robot. Res. **2007**, 28, 69–82.
- Smith, T.; Simmons, R. Heuristic search value iteration for POMDPs. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, Banff, Canada, 7–11 July 2004; pp. 520–527.
- Spaan, M.T.J.; Vlassis, N. Perseus: Randomized point-based value iteration for POMDPs. J. Artif. Intel. Res. **2005**, 24, 195–220.
- Hauskrecht, M. Value-function approximations for partially observable Markov decision processes. J. Artif. Intel. Res. **2000**, 13, 33–94.
- Littman, M.L.; Cassandra, A.R.; Kaelbling, L.P. Efficient Dynamic-Programming Updates in Partially Observable Markov Decision Processes; Technical Report CS-95-19; Brown University: Providence, RI, USA, 1996.
- Monahan, G.E. A survey of partially observable Markov decision processes: Theory, models, and algorithms. Manag. Sci. **1982**, 28, 1–16.
- Canepa, D.; Potop-Butucaru, M.G. Stabilizing Flocking Via Leader Election in Robot Networks. In Proceedings of the 9th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2007), Paris, France, 14–16 November 2007; pp. 52–66.
- Gervasi, V.; Prencipe, G. Robotic Cops: The Intruder Problem. In Proceedings of the 2003 IEEE Conference on Systems, Man and Cybernetics (SMC 2003), Washington, DC, USA, 5–8 October 2003; pp. 2284–2289.
- Prencipe, G. The effect of synchronicity on the behavior of autonomous mobile robots. Theory Comput. Syst. **2005**, 38, 539–558.
- Dudek, A.; Gordinowicz, P.; Prałat, P. Cops and robbers playing on edges. J. Comb. **2013**, 5, 131–153.
- Kuhn, H.W. Extensive games. Proc. Natl. Acad. Sci. USA **1950**, 36, 570–576.
- Bonato, A.; MacGillivray, G. A General Framework for Discrete-Time Pursuit Games; preprint.
- Hahn, G.; MacGillivray, G. A note on k-cop, l-robber games on graphs. Discret. Math. **2006**, 306, 2492–2497.
- Berwanger, D. Graph Games with Perfect Information; preprint.
- Mazala, R. Infinite games. Automata, Logics and Infinite Games **2002**, 2500, 23–38.
- Aigner, M.; Fromme, M. A game of cops and robbers. Discret. App. Math. **1984**, 8, 1–12.
- Osborne, M.J. A Course in Game Theory; MIT Press: Cambridge, MA, USA, 1994.
- Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley & Sons, Inc.: New York, NY, USA, 1994.
- Kehagias, A.; Prałat, P. Some remarks on cops and drunk robbers. Theor. Comput. Sci. **2012**, 463, 133–147.
- De la Barrière, R.P. Optimal Control Theory: A Course in Automatic Control Theory; Dover Publications: New York, NY, USA, 1980.
- Eaton, J.H.; Zadeh, L.A. Optimal pursuit strategies in discrete-state probabilistic systems. Trans. ASME Ser. D J. Basic Eng. **1962**, 84, 23–29.
- Howard, R.A. Dynamic Probabilistic Systems, Volume II: Semi-Markov and Decision Processes; Dover Publications: New York, NY, USA, 1971.
- Raghavan, T.E.S.; Filar, J.A. Algorithms for stochastic games—A survey. Math. Methods Oper. Res. **1991**, 35, 437–472.

© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).